
MAHARSHI DAYANAND UNIVERSITY

Notes in Easy Language

CREATED BY: ROHIT RATHOUR

CONTACT NO-7827646303

MDU BCA-3rd SEMESTER


OPERATING SYSTEM
INDEX
SECTION-I
Fundamentals of Operating Systems:
 Introduction to Operating Systems
 Its need and Operating System services
 Early systems
 Structures - Simple Batch, Multiprogrammed, Timeshared
 Personal Computer
 Parallel Computer
 Distributed Systems
 Real Time Systems
Process Management:
 Process concept
 Operation on processes
 Cooperating Processes
 Threads
 Interprocess Communication.
SECTION-II
CPU Scheduling:
 Basic Concept
 Scheduling Criteria
 Scheduling Algorithms: FCFS, SJF, Round Robin & Queue Algorithm
Deadlocks:
 Deadlock characterization
 Methods for handling deadlocks
 Banker’s Algorithm
SECTION-III
Memory Management:
 Logical versus Physical address space
 Swapping
 Contiguous allocation
 Paging
 Segmentation
Virtual Memory:
 Demand Paging
 Performance of Demand Paging
 Page Replacement
 Page Replacement Algorithm
 Thrashing

SECTION-IV
File Management:
 File System Structure
 Allocation methods: Contiguous allocation,
Linked allocation, Indexed allocation
 Free space management: Bit vector, Linked
list, Grouping, Counting
Device Management:
 Disk structure
 Disk scheduling: FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK

SECTION-I

FUNDAMENTALS OF OPERATING SYSTEM

Introduction to Operating System:

An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system.

Operating System – Definition:

 An operating system is a program that controls the execution of application programs and acts as an interface between the user of a computer and the computer hardware.
 A more common definition is that the operating system is
the one program running at all times on the computer
(usually called the kernel), with all else being application
programs.
 An operating system is concerned with the allocation of
resources and services, such as memory, processors,
devices, and information. The operating system
correspondingly includes programs to manage these
resources, such as a traffic controller, a scheduler, a memory
management module, I/O programs, and a file system.
Functions of Operating System – An operating system performs four functions:

1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to
be used efficiently.
3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.
4. Throughput: An OS should be constructed so that it can give maximum throughput (number of tasks completed per unit time).

Major Functionalities of Operating System:

 Resource Management: When multiple users access the system in parallel, the OS acts as a resource manager. Its responsibility is to share the hardware among the users, which decreases the load on the system.
 Process Management: This includes tasks such as scheduling and terminating processes. The OS manages many tasks at a time; CPU scheduling decides, using various scheduling algorithms, which task runs next.
 Storage Management: The file system mechanism is used for managing secondary storage. NTFS, CIFS, NFS, etc. are some common file systems. All data is stored on various tracks of hard disks, all of which are managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The operating system has to keep track of how much memory has been used and by whom. It has to decide which process needs memory space and how much. The OS also has to allocate and deallocate memory space.
 Security/Privacy Management: Privacy is also provided
by the Operating system by means of passwords so that
unauthorized applications can’t access programs or data.
For example, Windows uses Kerberos authentication to
prevent unauthorized access to data.

The Operating System as a User Interface:


1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of the hardware, operating system, system programs, and application programs. The hardware consists of memory, CPU, ALU, I/O devices, peripheral devices, and storage devices. System programs consist of compilers, loaders, editors, the OS, etc. Application programs consist of business programs, database programs, etc.

Fig1: Conceptual view of a computer system


Every computer must have an operating system to run other
programs. The operating system coordinates the use of the
hardware among the various system programs and application
programs for various users. It simply provides an environment
within which other programs can do useful work.
The operating system is a set of special programs that run on
a computer system that allows it to work properly. It
performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk,
sending output to the display screen, and controlling
peripheral devices.
OS is designed to serve two basic purposes:

1. It controls the allocation and use of the computing system's resources among the various users and tasks.
2. It provides an interface between the computer hardware and the programmer that simplifies and makes feasible the creation, coding, and debugging of application programs.
The operating system must support the following tasks:
1. Provide the facilities to create and modify programs and data files using an editor.
2. Provide access to the compiler for translating the user program from high-level language to machine language.
3. Provide a loader program to move the compiled program code to the computer's memory for execution.
4. Provide routines that handle the details of I/O programming.

I/O System Management –
The module that keeps track of the status of devices is called the I/O traffic controller. Each I/O device has a device handler that resides in a separate process associated with that device.
The I/O subsystem consists of:
 A memory management component that includes buffering, caching, and spooling.
 A general device-driver interface.
 Drivers for specific hardware devices.


Assembler –
The input to an assembler is an assembly language program. The output is an object program plus information that enables the loader to prepare the object program for execution. At one time, the computer programmer had at his disposal a basic machine that interpreted, through hardware, certain fundamental instructions. He would program this computer by writing a series of ones and zeros (machine language) and placing them into the memory of the machine.

Compiler –
High-level languages (examples are FORTRAN, COBOL, ALGOL, and PL/I) are processed by compilers and interpreters. A compiler is a program that accepts a source program in a high-level language and produces a corresponding object program. An interpreter is a program that appears to execute a source program as if it were machine language. The same name (FORTRAN, COBOL, etc.) is often used to designate both a compiler and its associated language.

Loader –
A Loader is a routine that loads an object program and
prepares it for execution. There are various loading schemes:
absolute, relocating, and direct-linking. In general, the loader
must load, relocate and link the object program. The loader is
a program that places programs into memory and prepares
them for execution. In a simple loading scheme, the
assembler outputs the machine language translation of a
program on a secondary device and a loader places it in the
core. The loader places into memory the machine language
version of the user’s program and transfers control to it.
Since the loader program is much smaller than the assembler, this makes more core memory available to the user's program.

History of Operating system –


The operating system has been evolving through the years.
The following table shows the history of OS.

Generation    Year         Electronic device used       Types of OS/Device
First         1945-55      Vacuum Tubes                 Plug Boards
Second        1955-65      Transistors                  Batch Systems
Third         1965-80      Integrated Circuits (ICs)    Multiprogramming
Fourth        Since 1980   Large Scale Integration      PC

Types of Operating System –

 Batch Operating System – executes a sequence of jobs in a program on a computer without manual intervention.
 Time-Sharing Operating System – allows many users to share the computer resources (maximum utilization of the resources).
 Distributed Operating System – manages a group of different computers and makes them appear to be a single computer.
 Network Operating System – computers running different operating systems can participate in a common network (it is used for security purposes).
 Real-Time Operating System – meant for applications with fixed deadlines.
Examples of Operating Systems are –
 Windows (GUI based, PC)
 GNU/Linux (Personal, Workstations, ISP, File and print
server, Three-tier client/Server)
 macOS (Macintosh), used for Apple’s personal computers
and workstations (MacBook, iMac).
 Android (Google’s Operating System for
smartphones/tablets/smartwatches)
 iOS (Apple’s OS for iPhone, iPad, and iPod Touch)

Features of Operating System (OS)

Here is a list of important features of an OS:
 Protected and supervisor mode
 Disk access and file systems
 Device drivers
 Networking
 Security
 Program execution
 Memory management
 Virtual memory
 Multitasking
 Handling I/O operations
 Manipulation of the file system
 Error detection and handling
 Resource allocation
 Information and resource protection

Advantage of Operating System

 Allows you to hide the details of hardware by creating an abstraction
 Easy to use with a GUI
 Offers an environment in which a user may execute programs/applications
 Makes sure that the computer system is convenient to use
 Acts as an intermediary among applications and the hardware components
 Provides the computer system resources in an easy-to-use format
 Acts as an intermediator between all hardware and software of the system

Disadvantages of Operating System

 If any issue occurs in the OS, you may lose all the contents that have been stored in your system
 Operating system software is quite expensive for small organizations, which adds a burden on them (example: Windows)
 It is never entirely secure, as a threat can occur at any time

What is Kernel in Operating System?

The kernel is the central component of a computer operating system. The only job performed by the kernel is to manage the communication between the software and the hardware. A kernel is at the nucleus of a computer. It makes the communication between the hardware and software possible. While the kernel is the innermost part of an operating system, a shell is the outermost one.

Fig: Introduction to Kernel

Features of Kernel
 Low-level scheduling of processes
 Inter-process communication
 Process synchronization
 Context switching

Types of Kernel
There are many types of kernels, but the two most popular are:
1. Monolithic
A monolithic kernel is a single block of program code that provides all the required services offered by the operating system. It is a simple design in which the kernel forms a single communication layer between the hardware and the software.
2. Microkernels
A microkernel manages all system resources, but in this type of kernel, services are implemented in different address spaces: user services are stored in user address space, and kernel services are stored in kernel address space. This helps to reduce the size of both the kernel and the operating system.

Difference between Firmware and Operating System

Below are the key differences between firmware and an operating system:

Firmware: Firmware is a kind of programming that is embedded on a chip in a device and controls that specific device.
Operating System: The OS provides functionality over and above that which is provided by the firmware.

Firmware: Firmware is a program encoded by the manufacturer of the IC and cannot be changed.
Operating System: The OS is a program that can be installed by the user and can be changed.

Firmware: It is stored on non-volatile memory.
Operating System: The OS is stored on the hard drive.
Difference between 32-Bit and 64-Bit Operating System

Architecture and Software: A 32-bit system allows 32 bits of data to be processed simultaneously; a 64-bit system allows 64 bits of data to be processed simultaneously.
Compatibility: 32-bit applications require a 32-bit OS and CPU; 64-bit applications require a 64-bit OS and CPU.
Systems Available: 32-bit: all versions of Windows 8, Windows 7, Windows Vista, Windows XP, Linux, etc.; 64-bit: Windows XP Professional, Vista, 7, Mac OS X, and Linux.
Memory Limits: 32-bit systems are limited to 3.2 GB of RAM; 64-bit systems allow a maximum of 17 billion GB of RAM.
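
As a quick check on these limits: a 32-bit address can distinguish 2^32 = 4,294,967,296 byte locations, i.e. about 4 GB of addressable memory (32-bit Windows reserves part of this range for devices, which is why only roughly 3.2 GB is usable in practice). A 64-bit address can distinguish 2^64 ≈ 1.8 × 10^19 bytes, which is about 1.7 × 10^10 GB, the "17 billion GB" quoted above.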

Needs and Services of Operating Systems:

The goal of an Operating System: The fundamental goal of an operating system is to execute user programs and to make tasks easier. Various application programs along with hardware systems are used to perform this work. An operating system is software that manages and controls the entire set of resources and effectively utilizes every part of a computer.
The figure shows how the OS acts as a medium between hardware units and application programs.

Need for Operating System:


 OS as a platform for Application programs:
The operating system provides a platform on top of which other programs, called application programs, can run. These application programs help users perform specific tasks easily. The OS acts as an interface between the computer and the user. It is designed in such a manner that it operates, controls, and executes various applications on the computer.

 Managing Input-Output unit:
The operating system also allows the computer to manage its own resources such as memory, monitor, keyboard, printer, etc. Management of these resources is required for effective utilization. The operating system controls the various system input-output resources and allocates them to the users or programs as per their requirements.
 Consistent user interface:
The operating system provides the user with an easy-to-use interface, so the user doesn't have to learn a different UI every time and can focus on the content and be productive as quickly as possible. The operating system provides templates and UI components to make working with a computer really easy for the user.

 Multitasking:
Operating System manages memory and allows multiple
programs to run in their own space and even communicate
with each other through shared memory. Multitasking gives
users a good experience as they can perform several tasks
on a computer at a time.

Operating system services:


The operating system provides the programming environment in which a programmer works on a computer system. The user program requests various resources through the operating system. The operating system provides several services to programmers and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking those interfaces, an application can request a service from the operating system, pass parameters, and receive the results of the operation.
Following are the services provided by an operating system:
o Program execution
o Control Input/output devices
o Program creation
o Error Detection and Response
o Accounting
o Security and Protection
o File Management
o Communication
Program execution
To execute a program, several tasks need to be performed. Both the instructions and the data must be loaded into main memory. In addition, input-output devices and files should be initialized, and other resources must be prepared. The operating system handles these tasks, so the user no longer needs to worry about memory allocation, multitasking, or anything similar.
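
As a small illustration of how a program requests this service (a minimal POSIX sketch in C, assuming a Unix-like system; the program name "ls" is just an example), fork() asks the OS to create a process and execlp() asks it to load and run another program, with the OS doing all the memory allocation and loading:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* system call: create a new process */
        if (pid < 0) {
            perror("fork");              /* the OS reports why the request failed */
            exit(1);
        }
        if (pid == 0) {
            /* child: ask the OS to load and execute another program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if loading failed */
            exit(1);
        }
        wait(NULL);                      /* parent: wait until the child finishes */
        printf("child program finished\n");
        return 0;
    }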
Control Input/output devices
There are numerous types of I/O devices within the computer system, and each I/O device requires its own specific set of instructions for operation. The operating system hides these details by presenting a uniform interface. Thus, it is convenient for programmers to access such devices easily.
Program Creation
The operating system offers structures and tools, including editors and debuggers, to help the programmer create, modify, and debug programs.
Error Detection and Response
An error in a device may cause malfunctioning of the entire system. These include hardware and software errors such as device failure, memory errors, division by zero, and attempts to access forbidden memory locations. To handle errors, the operating system monitors the system, detects errors, and takes suitable action with the least impact on running applications.
While working with computers, errors may occur quite often. Errors may occur in the:
o Input/ Output devices: For example, connection failure
in the network, lack of paper in the printer, etc.
o User program: For example: attempt to access illegal
memory locations, divide by zero, use too much CPU
time, etc.
o Memory hardware: For example, Memory error, the
memory becomes full, etc.
To handle these errors and other types of possible errors, the
operating system takes appropriate action and generates
messages to ensure correct and consistent computing.
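
For instance, here is a minimal C sketch (the file name is hypothetical) of how a user program sees the operating system's error reporting: when a request fails, the OS returns an error code that the program can turn into a readable message instead of crashing.

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>

    int main(void) {
        /* ask the OS to open a file that does not exist */
        FILE *f = fopen("/no/such/file.txt", "r");
        if (f == NULL) {
            /* the OS detected the error and reported it via errno */
            printf("open failed: %s (errno=%d)\n", strerror(errno), errno);
            return 1;
        }
        fclose(f);
        return 0;
    }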
Accounting
An operating system collects usage statistics for various resources and tracks performance parameters and response times to improve performance. These records are useful for further upgrades and for tuning the system to enhance overall performance.
Security and Protection
The operating system provides protection for a user's data and programs and guards against interference from unauthorized users. The security feature counters threats posed by individuals outside the control of the operating system.
For Example:
When a user downloads something from the internet, that program may contain malicious code that may harm the already existing programs. The operating system ensures that proper checks are applied while downloading such programs.
If one computer system is shared among multiple users, then the various processes must be protected from one another's interference. For this, the operating system provides mechanisms that allow only those processes that have gained proper authorization from the operating system to use resources. The mechanism may include providing unique user IDs and passwords to each user.
File management
Computers keep data and information on secondary storage devices like magnetic tape, magnetic disk, and optical disk. Each storage medium has its own capabilities, such as speed, capacity, data transfer rate, and data access method.
For file management, the operating system must know the types of different files and the characteristics of different storage devices. It also has to offer sharing and protection mechanisms for files.
Communication
The operating system manages the exchange of data and
programs among different computers connected over a
network. This communication is accomplished using message
passing and shared memory.
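
As a concrete sketch of OS-managed message passing (a minimal POSIX example in C, assuming a Unix-like system; across a network the same idea would use sockets), a parent and child process exchange a message through a pipe provided by the operating system:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) {            /* ask the OS for a one-way channel */
            perror("pipe");
            return 1;
        }
        if (fork() == 0) {               /* child: the sender */
            close(fd[0]);                /* close the unused read end */
            const char *msg = "hello from the child";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        }
        /* parent: the receiver */
        close(fd[1]);                    /* close the unused write end */
        char buf[64];
        read(fd[0], buf, sizeof(buf));
        printf("received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }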

Early Generations of Operating System:


The First Generation (1940 to early 1950s)
When the first electronic computers were developed in the 1940s, they were created without any operating system. In early times, users had full access to the computer machine and wrote a program for each task in absolute machine language. Programmers could perform and solve only simple mathematical calculations during this computer generation, and such calculations did not require an operating system.
The Second Generation (1955 - 1965)
The first operating system (OS) was created in the early 1950s and was known as GMOS; General Motors developed this OS for an IBM computer. Second-generation operating systems were based on single-stream batch processing: they collected similar jobs into groups or batches and submitted them to the operating system on punch cards to be completed in the machine. At the completion of each job (whether normal or abnormal), control transferred back to the operating system, which cleaned up after the finished job and then read and initiated the next job from a punch card. The new machines of this era were called mainframes; they were very big and were used by professional operators.
The Third Generation (1965 - 1980)
During the late 1960s, operating system designers developed operating systems that could perform multiple tasks on a single computer simultaneously, a technique called multiprogramming. The introduction of multiprogramming played a very important role in the development of operating systems, as it allows a CPU to be kept busy at all times by switching among different tasks on the computer. The third generation also saw the phenomenal growth of minicomputers, starting in 1961 with the DEC PDP-1. These PDPs led to the creation of personal computers in the fourth generation.
The Fourth Generation (1980 - Present Day)
The fourth generation of operating systems is related to the development of the personal computer. The personal computer was very similar to the minicomputers developed in the third generation, but it cost only a small fraction of a minicomputer's price. A major factor in the creation of personal computers was the birth of Microsoft and the Windows operating system. Microsoft was founded in 1975, and Bill Gates and Paul Allen had the vision to take personal computers to the next level. They introduced MS-DOS in 1981; however, its cryptic commands made it difficult for people to use. Today, Windows has become the most popular and most commonly used operating system technology. Microsoft went on to release various operating systems such as Windows 95, Windows 98, Windows XP, and Windows 7; currently, most Windows users use the Windows 10 operating system. Besides the Windows operating system, Apple's Macintosh OS (Mac OS), built in the 1980s under Apple co-founder Steve Jobs, is another popular operating system.

Structures - Simple Batch, Multiprogrammed, Timeshared:

Batch Operating System:

Batch processing was very popular in the 1970s. Jobs were executed in batches. People used to have a single computer known as a mainframe. Users of batch operating systems do not interact directly with the computer. Each user prepares their job using an offline device like a punch card and submits it to the computer operator. Jobs with similar requirements are grouped and executed as a group to speed up processing. Once the programmers have left their programs with the operator, the operator sorts the programs with similar needs into batches.

The batch operating system groups jobs that perform similar functions. These job groups are treated as a batch and executed together. A computer system with this operating system performs the following batch processing activities:
1. A job is a single unit that consists of a preset sequence of commands, data, and programs.
2. Processing takes place in the order in which jobs are received, i.e., first come, first served.
3. These jobs are stored in memory and executed without the need for manual intervention.
4. When a job is successfully run, the operating system releases its memory.
Types of Batch Operating System

There are mainly two types of batch operating systems. These are as follows:
1. Simple Batched System
2. Multi-programmed batched system
Simple Batched System
The user did not directly interact with the computer system for job execution in a simple batch operating system. Instead, the user prepared a job that included the program, control information, and data on the nature of the job, usually on control cards in the form of punch cards, and submitted it to the computer operator. The program's output included results and, in the event of a program error, register and memory dumps. The output appeared after some time that could take minutes, hours, or days.
The system's main role was to transfer control from one job to another. Jobs with similar requirements were pooled together and processed through the processor to improve processing speed. The operators created batches of programs with similar needs, and the computer ran the batches one by one as they became available. This system typically read a sequence of jobs, each with its control cards and predefined job tasks.
Multi-programmed Batched System
Spooling deals with many jobs that have already been read and are waiting to run on disk. A disk containing a pool of jobs allows the operating system to choose which job to run next to maximize CPU utilization. Jobs that come in on magnetic tape or cards directly cannot be run in a different order; they run sequentially, in a first-come, first-served manner. When various jobs are stored on a direct-access device like a disk, job scheduling becomes possible. Multiprogramming is an important feature of job scheduling. Spooling and offline operation for overlapped I/O have their limitations: in general, a single user could not keep all of the input/output devices and the CPU busy at all times.
In the multi-programmed batched system, several jobs are kept in memory at a time while the CPU executes only one of them, to improve CPU utilization. The operating system selects one job and begins executing it in memory. Eventually, that job may have to wait for a task to complete, such as mounting a tape or an I/O operation. In a multiprogramming system, the CPU does not sit idle during this wait, because the operating system switches to another job. When the first job's wait is over and the current job has completed, the CPU is returned to it.
Why are Batch Operating Systems used?
Batch operating systems place less stress on the CPU and involve minimal user interaction, which is why you can still use them nowadays. Another benefit of batch operating systems is that huge repetitive jobs can be completed without interacting with the computer to tell the system that one job is finished and the next should be performed.
Old batch operating systems weren't interactive, meaning the user did not interact with the program while it executed. Modern batch operating systems now support interaction. For example, you may schedule a job, and when the specified time arrives, the computer notifies the processor that the time is up.
How does Batch Operating System work?
The operating system keeps a number of jobs in memory and performs them one at a time. Jobs are processed in a first-come, first-served manner. Each set of jobs is defined as a batch. When a job is finished, its memory is freed, and the job's output is transferred into an output spool for later printing or processing.
User interaction is limited in the batch operating system. Once the system has accepted a task from the user, the user is free. You may also use the batch processing system to update data relating to any transactions or records.
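
The first-come, first-served behaviour described above can be sketched as a tiny simulation (the job names and run times below are hypothetical, chosen only for illustration): each job runs to completion before the next one starts, so the finish times simply accumulate.

    #include <stdio.h>

    /* Each batch job runs to completion before the next one starts. */
    struct job { const char *name; int burst; };

    int main(void) {
        struct job queue[] = { {"JOB1", 5}, {"JOB2", 2}, {"JOB3", 8} };
        int n = sizeof(queue) / sizeof(queue[0]);
        int clock = 0;
        for (int i = 0; i < n; i++) {            /* first come, first served */
            printf("%s starts at t=%d, finishes at t=%d\n",
                   queue[i].name, clock, clock + queue[i].burst);
            clock += queue[i].burst;             /* no preemption in between */
        }
        return 0;
    }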
Role of Batch Operating System
A batch operating system's primary role is to execute jobs in batches automatically. The main task of a batch processing system is done by the 'Batch Monitor', which resides at the low end of main memory. This technique was made possible by the development of hard disk drives and card readers. Jobs can now be stored on a disk to form a pool of jobs for batch execution, grouped so that similar jobs are placed in the same batch. The batch operating system then runs the batched jobs one after the other automatically, saving time by performing setup tasks only once. This resulted in a better system due to reduced turnaround time.
Characteristics of Batch Operating System
There are various characteristics of the batch operating system. Some of them are as follows:
1. The CPU executes the jobs in the same sequence that they are sent to it by the operator, which implies that the task sent to the CPU first will be executed first. This is also known as 'first come, first served'.
2. The word job refers to the command or instruction that the user and the program should perform.
3. A batch operating system runs a set of user-supplied instructions composed of distinct instructions and programs with several similarities.
4. When a task is successfully executed, the OS releases the memory space held by that job.
5. The user does not interact directly with the operating system in a batch operating system; rather, all instructions are sent to the operator.
6. The operator evaluates the users' instructions and creates sets of instructions having similar properties.
Advantages and Disadvantages of Batch Operating System
There are various advantages and disadvantages of the batch operating system. Some of them are as follows:
Advantages
There are various advantages of the batch operating system. Some of them are as follows:
1. It isn't easy to forecast how long it will take to complete a job; only the batch system's processors know how long it will take to finish the jobs in line.
2. This system can easily manage large jobs again and again.
3. The batch process can be divided into several stages to increase processing speed.
4. When a process is finished, the next job from the job spool is run without any user interaction.
5. CPU utilization gets improved.
Disadvantages
There are various disadvantages of the batch operating system. Some of them are as follows:
1. When a job fails once, it must be scheduled again to be completed, and it may take a long time to complete the task.
2. Computer operators must have full knowledge of batch systems.
3. The batch system is quite difficult to debug.
4. The computer system and the user have no direct interaction.
5. If a job enters an infinite loop, other jobs must wait for an unknown period of time.
Time-Sharing Operating System
Time-sharing is a technique that enables many people located
at various terminals to use a particular computer system
simultaneously. Time-Sharing is the logical extension of
multiprogramming. In this time-sharing Operating system,
many processes are allocated with computer resources in
respective time slots. In this, the processor's time is shared with
multiple users. That's why it is called a time-sharing operating
system. It has a fixed time slice for the different processes. Its
main purpose is interactive response time.

The CPU executes multiple jobs by switching between them, but the switches occur so frequently that each user can receive an immediate response. The operating system uses CPU scheduling and multiprogramming to provide each user with a small amount of time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems, the objective is to maximize processor use, whereas in time-sharing systems, the objective is to minimize response time.
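
A rough sketch of the fixed time slice at work (hypothetical burst times and a quantum of 2 units; real schedulers are far more elaborate): each job gets at most one quantum before the CPU moves on, so every user receives service early instead of waiting for whole jobs to finish.

    #include <stdio.h>

    int main(void) {
        int remaining[] = {5, 2, 8};     /* hypothetical job burst times */
        int n = 3, quantum = 2, clock = 0, left = n;
        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0) continue;
                int run = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%d: job %d runs for %d unit(s)\n", clock, i + 1, run);
                clock += run;
                remaining[i] -= run;
                if (remaining[i] == 0) left--;   /* job finished */
            }
        }
        return 0;
    }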
Features of Time-Sharing OS
A time-sharing OS provides the following features for users:
o Each user gets dedicated time for all operations.
o Multiple online users can use the same computer at the same time.
o End-users feel that they monopolize the computer system.
o Better interaction between users and computers.
o User requests are met with short response times.
o A user no longer has to wait for the previous task to end to get the processor.
o It can process a lot of tasks quickly.
Advantages of Time-Sharing OS
The time-sharing operating system has the following
advantages:
o It provides a quick response
o Reduces CPU idle time
o All the tasks are given a specific time
o Less probability of duplication of software
o Improves response time
o Easy to use and user friendly
Disadvantages of Time-Sharing OS
Below are some disadvantages of the time-sharing operating
system, such as:
o It consumes many resources
o Requires high specification of hardware
o It has a problem of reliability
o An issue with the security and integrity of user programs
and data
o Probability of data communication problem

Difference between Multiprogramming and Time-Sharing System
In multiprogramming, more than one process can reside in main memory at a time. Thus, when one process goes for an I/O operation, the CPU is not left waiting; it is allocated to another process. This keeps the CPU busy at all times.
The concept of time-sharing overcomes the problem of no user interaction. A time-sharing system requires that each user be provided with an input device (keyboard or mouse) and an output device (monitor) to interact with the system. In time-sharing, multiple jobs are executed simultaneously, and the CPU switches among them so frequently that each user can interact with each program while it is running. This decreases the system's response time for each user process and gives each user the illusion of having the CPU to themselves.
The main difference between multiprogramming and time-sharing is that multiprogramming effectively utilizes CPU time by allowing several programs to use the CPU concurrently, while time-sharing is the sharing of a computing facility by several users who want to use the same facility simultaneously. Each user on a time-sharing system gets their own terminal and feels like they are using the CPU alone. Time-sharing systems use the concept of multiprogramming to share CPU time between multiple users at the same time. Below are some more differences between a multiprogramming system and a time-sharing system:

1. Multiprogramming System: Allows executing multiple processes by monitoring their process states and switching between processes.
   Time-Sharing System: The logical extension of multiprogramming; many users/processes are allocated computer resources in respective time slots.

2. Multiprogramming System: The processor and memory underutilization problem is resolved, and multiple programs run on the CPU. That is why it is called multiprogramming.
   Time-Sharing System: The processor's time is shared among multiple users. That is why it is called a time-sharing operating system.

3. Multiprogramming System: Processes are executed by a single processor.
   Time-Sharing System: Two or more users can use the processor from their terminals.

4. Multiprogramming System: Has no fixed time slice.
   Time-Sharing System: Has a fixed time slice.

5. Multiprogramming System: Before a task finishes, executive power is not taken away from it.
   Time-Sharing System: Executive power is taken away before execution finishes.

6. Multiprogramming System: The system does not spend a fixed amount of time on each process.
   Time-Sharing System: The system works for the same or less time on each process.

7. Multiprogramming System: The system depends on devices, such as I/O interrupts, to switch between tasks.
   Time-Sharing System: The system depends on time to switch between different processes.

8. Multiprogramming System: The system model is multiple programs.
   Time-Sharing System: The system model is multiple programs and multiple users.

9. Multiprogramming System: Maximizes processor use.
   Time-Sharing System: Minimizes response time.

10. Multiprogramming System: Examples: Mac OS, Windows OS, and microcomputer OSes such as MP/M, XENIX, and ESQview.
    Time-Sharing System: Examples: Windows NT Server, Unix, Linux, Multics, TOPS-10, TOPS-20.

Multiprogramming Operating System:

A multiprogramming operating system may run many programs on a single-processor computer. If one program must wait for an input/output transfer in a multiprogramming operating system, the other programs are ready to use the CPU. As a result, various jobs may share CPU time, although their jobs are not guaranteed to execute in the same time period.
When a program is being performed, it is known as a "task", "process", or "job". Concurrent program execution improves system resource utilization and throughput as compared to serial and batch processing systems.
The primary goal of multiprogramming is to manage the entire system's resources. The key components of a multiprogramming system are the file system, command processor, transient area, and I/O control system. Multiprogramming operating systems are designed to store different programs in sub-segments of the transient area. The resource management routines are linked with the operating system's core functions.
Types of the Multiprogramming Operating System
There are mainly two types of multiprogramming operating
systems. These are as follows:
1. Multitasking Operating System
2. Multiuser Operating System
Multitasking Operating System
A multitasking operating system enables the execution of two
or more programs at the same time. The operating system
accomplishes this by shifting each program into and out of
memory one at a time. When a program is switched out of
memory, it is temporarily saved on disk until it is required
again.
Multiuser Operating System
A multiuser operating system allows many users to share processing time on a powerful central computer from different terminals. The operating system accomplishes this by rapidly switching between terminals, each of which receives a limited amount of processor time on the central computer. The operating system changes among terminals so quickly that each user seems to have continuous access to the central computer. If there are many users on a system like this, the time it takes the central computer to reply can become more noticeable.
Working of the Multiprogramming Operating System
Multiple users can accomplish their jobs simultaneously in a multiprogramming system, where several jobs are stored in main memory. When one program is engaged in I/O operations, the CPU may devote time to other programs instead of sitting idle.
When one application is waiting for an I/O transfer, another is ready to use the processor, so numerous programs may share CPU time. All jobs do not run simultaneously; rather, several jobs are in progress over the same period, with part of one process executed first, then a segment of another, and so on. The overall goal of a multiprogramming system is to keep the CPU busy as long as tasks are available in the job pool. Thus, numerous programs can run on a single-processor computer, and the CPU is never idle.
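
The overlap of CPU work and I/O waits can be shown with a toy simulation (made-up burst and I/O times; a hand-written sketch, not real scheduler code): whenever the running job blocks for I/O, the CPU is handed to another resident job, and it idles only when every job is waiting.

    #include <stdio.h>
    #define N 2

    int main(void) {
        /* made-up workloads: CPU burst, I/O wait, final CPU burst */
        int need[N][3] = { {3, 3, 2}, {4, 2, 3} };
        int phase[N] = {0, 0};               /* index into need[]; 3 means done */
        int finished = 0;

        for (int t = 0; finished < N; t++) {
            /* advance every job waiting on I/O (devices work in parallel) */
            for (int j = 0; j < N; j++)
                if (phase[j] == 1 && --need[j][1] == 0) phase[j] = 2;

            /* give this CPU tick to the first job that is in a CPU phase */
            int picked = -1;
            for (int j = 0; j < N && picked < 0; j++)
                if (phase[j] == 0 || phase[j] == 2) picked = j;

            if (picked >= 0) {
                printf("t=%d: CPU runs job %d\n", t, picked + 1);
                if (--need[picked][phase[picked]] == 0) {
                    phase[picked]++;          /* burst done: to I/O or next step */
                    if (phase[picked] == 3) finished++;
                }
            } else {
                printf("t=%d: CPU idle (all jobs in I/O)\n", t);
            }
        }
        return 0;
    }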
Examples of Multiprogramming Operating System
Multiprogramming shows up whenever several programs run at once, for example downloading apps, transferring data, and running MS Excel, Google Chrome, and the Firefox browser at the same time. Examples of multiprogramming operating systems are Windows, UNIX, and microcomputer OSes such as XENIX, MP/M, and ESQview.
Advantages and Disadvantages of Multiprogramming Operating System
There are various advantages and disadvantages of the multiprogramming operating system. Some of them are as follows:
Advantages
There are various advantages of the multiprogramming operating system. Some of the advantages are as follows:
1. It provides a shorter response time.
2. It may help to run various jobs in a single application simultaneously.
3. It helps to optimize the total job throughput of the computer.
4. Various users may use the multiprogramming system at once.
5. Short-time jobs are completed quickly in comparison to long-time jobs.
6. It may help to improve turnaround time for short-time tasks.
7. It helps to improve CPU utilization, and the CPU never sits idle.
8. The resources are utilized smartly.
Disadvantages
There are various disadvantages of the multiprogramming operating system. Some of the disadvantages are as follows:
1. It is highly complicated and sophisticated.
2. CPU scheduling is required.
3. Memory management is needed in the operating system because all types of tasks are stored in main memory.
4. It is harder to handle all the processes and tasks.
5. If there are a large number of jobs, then the long-term jobs will require a long wait.

Personal Computer:

What is a PC?

A PC is a general-purpose, cost-effective computer; PC stands for personal computer. The term was coined when Ed Roberts introduced the MITS Altair 8800. Alternatively, it is referred to as a single-user computer or a desktop, designed to be used by a single end-user. All PCs depend on microprocessor technology, which makes it possible for PC makers to put the whole CPU (central processing unit) on a single chip.
Generally, a PC contains a keyboard, mouse, monitor, and system unit. Most personal computers have an Internet or network connection, including ports for connecting peripheral devices like external hard drives, scanners, printers, digital cameras, and more. Even though the term PC can refer to any personal computer, including an Apple Macintosh computer, it is commonly used in the computer industry to describe an IBM or IBM-compatible computer.
Personal computers are used to create spreadsheets, write papers, play games, track our finances, do accounting, run databases, and many other tasks. At home, a PC is widely used for playing PC games, multimedia entertainment, accessing the Internet, and more. If your PC is connected to the Internet, you can use it for communicating with friends via instant messaging programs, browsing the Web, checking e-mail, and downloading data or files. It is normal to create a network by connecting more than one PC together, even though PCs are designed to be used as single-user systems.
A personal computer can be a laptop computer, a
microcomputer, a handheld PC, a desktop computer, or a tablet
PC. Additionally, without PCs, life can be difficult as it has
become such an integral part of our lives.
Usually, PCs have various parts, which are as follows:
o Computer case
o Motherboard
o Power supply
o CD/DVD drives/writers
o Random-access memory (RAM)
o Hard disk(s)
o Numerous external devices, such as a keyboard, printer,
visual display and pointing device
In the mid-1960s and 1970s, computers were affordable only by large enterprises and universities and occupied whole rooms; multiple users accessed these computers through attached terminals. In the early 1980s, the term PC became popular, and Time magazine chose the personal computer as its "Machine of the Year" for 1982. Technology had advanced far enough by the late 1980s that a small computer could be owned by a single individual and used to perform specific tasks. IBM introduced its first personal computer, termed the IBM PC, in 1981, and it quickly became popular in the market.
IBM Clones
Later, other manufacturers adapted to the PC trend advanced by IBM by developing IBM clones. Clones were available at a lower price and were internally almost identical to the IBM PC. IBM clones were capable of running the same software as IBM PCs, as they used the same microprocessors. Over the years, IBM lost its dominance in the PC field; many of its innovations, like the OS/2 operating system and the MCA expansion bus, did not last long in the market. As of 2011, PCs are primarily classified into Apple Macintoshes and PCs from other manufacturers.
Benefits of Personal Computers
Personal computers play an important role for individuals in both personal and professional ways. From preschoolers to senior citizens, personal computers have advantages for everyone. The technology of the personal computer provides various benefits, saving you money and time in many ways.
o Education: A personal computer can be used for education purposes at school or college, at work, and at home. In school, a personal computer can be used to make an attendance sheet for students. Also, a teacher can use a personal computer to learn about a subject before discussing it in the classroom if he does not have expertise in it. It may be used to take quizzes in the classroom and to learn about other cultures with the help of software programs.
A personal computer can be used to learn new skills to improve your performance and new processes in the workplace. At home, it can be used by students or others to learn subjects such as math, spelling, and reading by using a software program on the computer.
o Entertainment: A personal computer is also used for entertainment. On the computer, you can play games with other people through software programs, even if they are thousands of miles away. Additionally, if someone does not want to play a game with others, single-player games are available on the personal computer as well.
It allows you to watch movies and television shows at any time, though you are required to download them, or you can watch videos online by connecting your personal computer to the Internet or a network. In this way, you can manage your schedule accordingly, rather than letting the television networks control it for you.
o Communication: A personal computer offers multiple options for communication with others, as you can use your computer to make a call over the Internet without needing a separate telephone. A personal computer is not only used for work but can also be used to share and synchronize schedules in personal life. It can be used by employees working as a team on opposite work schedules, by sharing documents and leaving notes about the project. Furthermore, it helps to reduce society's reliance on the phone through communication over e-mail and instant messaging on computers.
Communication is also possible between humans and machines, as well as between humans over a computer. For example, you can turn off your lights while you are outside your house. Also, you can set your DVR to record a movie if it is connected to your computer via a wireless Internet connection.
o Information: A personal computer helps people to have information constantly at their fingertips by connecting to the Internet. On the computer, a user can type a desired website URL into the search bar of the browser and visit the site in just a few seconds. There are several user-submitted communities, books, and encyclopedias available online that are designed to offer specific types of information, such as video game walkthroughs. Additionally, you can play education games offline or use digital encyclopedia software.
Disadvantages of Personal Computers
o Physical Side Effects: Frequent computer use can lead to a variety of physical problems like backaches, tight hamstrings, wrist soreness, and tension headaches. Laptop users face problems as they need to bend their backs in order to view the screen properly. It can also be harmful to people who use a computer all day, as they can be prone to tight muscles, strained eyes, and carpal tunnel syndrome. Furthermore, with a personal computer, people are required to sit all day, which can be a cause of more serious health conditions.
o Internet Addiction: A person who has a deep addiction to the Internet may be addicted to online gaming, cyber-relationships, online gambling, etc. People with an Internet addiction may use the Internet to escape loneliness, stress, or depression in their daily life and get anxious when they are offline. Although use of the Internet within limits is healthy and normal, people who have an addiction to the Internet always feel they must constantly be on the Internet. This often affects work and decreases school performance.

Parallel Operating System:

Parallel operating systems are a type of computer processing platform that breaks large tasks into smaller pieces that are done at the same time in different places and by different mechanisms. They are sometimes also described as "multi-core" processors. This type of system is usually very efficient at handling very large files and complex numerical code. It is most commonly seen in research settings where central server systems are handling a lot of different jobs at once, but it can be useful any time multiple computers are doing similar jobs and connecting to shared infrastructure simultaneously. Parallel systems can be difficult to set up at first and can require a bit of expertise, but most technology experts agree that, over the long term, they are much more cost-effective and efficient than their single-computer counterparts.

Distributed Systems:

A distributed system is a collection of autonomous computer systems that are physically separated but connected by a centralized computer network equipped with distributed system software. The autonomous computers communicate with each other by sharing resources and files and performing the tasks assigned to them.
Example of Distributed System:
Any social media platform can have its centralized computer network as its headquarters, and the computer systems that can be accessed by any user using its services are the autonomous systems in the distributed system architecture.

 Distributed System Software: This software enables computers to coordinate their activities and to share resources such as hardware, software, data, etc.
 Database: It is used to store the data processed by each node/system of the distributed system connected to the centralized network.
 Each autonomous system runs a common application that can have its own data, which is shared through the centralized database system.
 To transfer data to the autonomous systems, the centralized system should have a middleware service and should be connected to a network.
 Middleware services enable some services that are not present by default in the local systems or the centralized system, by acting as an interface between the centralized system and the local systems. Using the components of middleware services, systems communicate and manage data.
 The data transferred through the database is divided into segments or modules and shared with the autonomous systems for processing.
 The data is processed and then transferred back to the centralized system through the network and stored in the database.

Characteristics of Distributed System:
 Resource Sharing: The ability to use any hardware, software, or data anywhere in the system.
 Openness: Concerned with extensions and improvements to the system (i.e., how openly the software is developed and shared with others).
 Concurrency: Naturally present in distributed systems, where the same activity or functionality can be performed by separate users in remote locations. Every local system has its own independent operating system and resources.
 Scalability: The scale of the system can be increased as more processors communicate with more users, improving the responsiveness of the system.
 Fault Tolerance: Concerns the reliability of the system; if there is a failure in hardware or software, the system continues to operate properly without degrading performance.
 Transparency: Hides the complexity of the distributed system from users and application programs, so that there is privacy in every system.
Advantages of Distributed System:
 Applications in distributed systems are inherently distributed applications.
 Information in distributed systems is shared among geographically distributed users.
 Resource sharing (autonomous systems can share resources from remote locations).
 It has a better price/performance ratio and flexibility.
 It has shorter response times and higher throughput.
 It has higher reliability and availability against component failure.
 It has extensibility, so that systems can be extended to more remote locations, and it supports incremental growth.

Disadvantages of Distributed System:
 Mature software for distributed systems does not currently exist.
 Security poses a problem due to easy access to data, as resources are shared across multiple systems.
 Network saturation may hinder data transfer, i.e., if there is a lag in the network, the user will face problems accessing data.

Application Areas of Distributed System:
 Finance and Commerce: Amazon, eBay, online banking, e-commerce websites.
 Information Society: Search engines, Wikipedia, social networking, cloud computing.
 Cloud Technologies: AWS, Salesforce, Microsoft Azure, SAP.
 Entertainment: Online gaming, music, YouTube.
 Healthcare: Online patient records, health informatics.
 Education: E-learning.
 Transport and Logistics: GPS, Google Maps.
 Environment Management: Sensor technologies.

Difference Between Distributed System and Parallel System:

1. Parallel System: Parallel systems are systems that can process data simultaneously and thereby increase the computational speed of a computer system.
   Distributed System: In these systems, applications run on multiple computers linked by communication lines.

2. Parallel System: Parallel systems work with the simultaneous use of multiple computer resources, which can include a single computer with multiple processors.
   Distributed System: A distributed system consists of a number of computers that are connected and managed so that they share the job processing load among various computers distributed over the network.

3. Parallel System: Tasks are performed at a faster speed.
   Distributed System: Tasks are performed at a slower speed.

4. Parallel System: These are multiprocessor systems.
   Distributed System: Each processor has its own memory.

5. Parallel System: Also known as a tightly coupled system.
   Distributed System: Also known as a loosely coupled system.

6. Parallel System: These systems have close communication among more than one processor.
   Distributed System: These systems communicate with one another through various communication lines, such as high-speed buses or telephone lines.

7. Parallel System: These systems share memory, a clock, and peripheral devices.
   Distributed System: These systems do not share memory or a clock, in contrast to parallel systems.

8. Parallel System: All processors share a single master clock for synchronization.
   Distributed System: There is no global clock; distributed computing uses various synchronization algorithms.

Real-Time Systems:

A real-time operating system (RTOS) is a special-purpose


operating system used in computers that has strict time

constraints for any job to be performed. It is employed mostly


in those systems in which the results of the computations are
used to influence a process while it is executing. Whenever an
event external to the computer occurs, it is communicated to
the computer with the help of some sensor used to monitor the
event. The sensor produces the signal that is interpreted by the
operating system as an interrupt. On receiving an interrupt, the
operating system invokes a specific process or a set of
processes to serve the interrupt.

This process is completely uninterrupted unless a higher


priority interrupt occurs during its execution. Therefore, there
must be a strict hierarchy of priority among the interrupts. The
interrupt with the highest priority must be allowed to initiate
the process, while lower-priority interrupts should be kept in a
buffer that will be handled later. Interrupt management is
important in such an operating system.
Real-time operating systems employ special-purpose operating
systems because conventional operating systems do not
provide such performance.
The various examples of Real-time operating systems are:

o MTS
o Lynx
o QNX
o VxWorks etc.
Applications of Real-time operating system (RTOS):
RTOS is used in real-time applications that must work within
specific deadlines. Following are the common application areas of real-time operating systems.
o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in cell phone switching systems.
o Real-time operating systems are used in air traffic control systems.
o Real-time operating systems are used in medical imaging systems.
o Real-time operating systems are used in fuel injection systems.
o Real-time operating systems are used in traffic control systems.
o Real-time operating systems are used in autopilot flight simulators.
Types of Real-time operating system
Following are the three types of RTOS:

Hard Real-Time operating system:


In Hard RTOS, all critical tasks must be completed within the
specified time duration, i.e., within the given deadline. Not
meeting the deadline would result in critical failures such as
damage to equipment or even loss of human life.
For Example,
Let's take the example of airbags provided by carmakers along with the steering wheel in the driver's seat. When the driver applies the brakes at a particular instant, the airbags inflate and prevent the driver's head from hitting the steering wheel. Had there been a delay of even milliseconds, it would have resulted in an accident.
Similarly, consider stock-trading software. If someone wants to sell a particular share, the system must ensure that the command is performed within a given critical time. Otherwise, if the market falls abruptly, it may cause a huge loss to the trader.
Soft Real-Time operating system:
A soft RTOS accepts a few delays on the part of the operating system. In this kind of RTOS, there is a deadline assigned to a particular job, but a delay for a small amount of time is acceptable. So, deadlines are handled softly by this kind of RTOS.
For Example,
This type of system is used in Online Transaction systems and
Livestock price quotation Systems.
Firm Real-Time operating system:
A firm RTOS also needs to observe deadlines. However, missing a deadline may not have a big impact, but it could cause undesired effects, such as a large reduction in the quality of a product.
For Example, this system is used in various forms of
Multimedia applications.
Advantages of Real-time operating system:
The benefits of real-time operating system are as follows-:
o It is easy to design, develop and execute real-time applications under a real-time operating system.
o Real-time operating systems are more compact, so they require much less memory space.
o A real-time operating system achieves maximum utilization of devices and system resources.

o The focus is on running applications, with less importance given to applications waiting in the queue.
o Since the size of programs is small, an RTOS can also be used in embedded systems, such as in transport and others.
o These types of systems are considered nearly error-free.
o Memory allocation is best managed in these types of systems.
Disadvantages of Real-time operating system:
The disadvantages of real-time operating systems are as
follows-
o Real-time operating systems have complicated design principles and are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

CHAPTER-2

PROCESS MANAGEMENT

Process concept:

A Program does nothing unless its instructions are executed by


a CPU. A program in execution is called a process. In order to
accomplish its task, a process needs computer resources. There may exist more than one process in the system which may require the same resource at the same time. Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient way.
Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities
in connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

Operation on processes:

The execution of a process is a complex activity. It


involves various operations. Following are the
operations that are performed while execution of a
process:

1. Creation: This is the initial step of the process execution activity. Process creation means the construction of a new process for execution. This might be performed by the system, a user, or an old process itself. There are several events that lead to process creation. Some of these events are the following:
 When we start the computer, system creates several

background processes.
 A user may request to create a new process.

 A process can create a new process itself while

executing.
 Batch system takes initiation of a batch job.

2. Scheduling/Dispatching: The event or activity in


which the state of the process is changed from ready to
running. It means the operating system puts the process
from ready state into the running state. Dispatching is
done by operating system when the resources are free or
the process has higher priority than the ongoing process.

There are various other cases in which the process in


running state is preempted and process in ready state is
dispatched by the operating system.
3. Blocking: When a process invokes an input-output
system call that blocks the process and operating system
put in block mode. Block mode is basically a mode
where process waits for input-output. Hence on the
demand of process itself, operating system blocks the
process and dispatches another process to the processor.
Hence, in process blocking operation, the operating
system puts the process in ‘waiting’ state.
4. Preemption: When a timeout occurs, meaning the process has not finished within the allotted time interval and the next process is ready to execute, the operating system preempts the process.
is only valid where CPU scheduling supports
preemption. Basically this happens in priority
scheduling where on the incoming of high priority
process the ongoing process is preempted. Hence, in
process preemption operation, the operating system puts
the process in ‘ready’ state.
5. Termination: Process termination is the activity of
ending the process. In other words, process termination
is the relaxation of computer resources taken by the
process for the execution. Like creation, in termination
also there may be several events that may lead to the
process termination. Some of them are:
 Process completes its execution fully and it indicates

to the OS that it has finished.



 Operating system itself terminates the process due to


service errors.
 There may be problem in hardware that terminates the
process.
 One process can be terminated by another process.

Cooperating Processes:

There are various processes in a computer system, which can


be either independent or cooperating processes that operate in the operating system. A process is considered independent when no other process operating on the system can impact it; independent processes don't share any data with other processes. On the other hand, a cooperating process may be affected by any other process executing on the system, and a cooperating process shares data with another.
Advantages of Cooperating Process in Operating System
There are various advantages of cooperating process in the
operating system. Some advantages of the cooperating system
are as follows:
1. Information Sharing
Cooperating processes can be used to share information
between various processes. It could involve having access to
the same files. A technique is necessary so that the processes
may access the files concurrently.
2. Modularity
Modularity refers to the division of complex tasks into smaller
subtasks. Different cooperating processes can complete these

smaller subtasks. As a result, the required tasks are completed


more quickly and efficiently.
3. Computation Speedup
Cooperating processes can be used to accomplish subtasks of a
single task simultaneously. It improves computation speed by
allowing the task to be accomplished faster. Although, it is only
possible if the system contains several processing elements.
4. Convenience
There are multiple tasks that a user requires to perform, such as
printing, compiling, editing, etc. It is more convenient if these
activities may be managed through cooperating processes.
Concurrent execution of cooperating processes needs systems
that enable processes to communicate and synchronize their
actions.
Methods of Cooperating Process
Cooperating processes may coordinate with each other by
sharing data or messages. The methods are given below:
1. Cooperation by sharing
The processes may cooperate by sharing data, including
variables, memory, databases, etc. The critical section provides
data integrity, and writing is mutually exclusive to avoid
inconsistent data.

Here, you see a diagram that shows cooperation by sharing. In


this diagram, Process P1 and P2 may cooperate by using shared
data like files, databases, variables, memory, etc.
2. Cooperation by Communication
The cooperating processes may cooperate by using messages.
If every process waits for a message from another process to
execute a task, it may cause a deadlock. If a process does not
receive any messages, it may cause starvation.

Here, you have seen a diagram that shows cooperation by


communication. In this diagram, Process P1 and P2 may
cooperate by using messages to communicate.
Example: Producer-Consumer Problem
Let's take an example of two cooperating processes. It is
referred to as the Producer-Consumer Problem, and it involves
two processes: the producer and the consumer.
Producer Process
It generates information that the consumer would consume.
Consumer Process
It consumes the information that the producer produces.
Both processes run simultaneously. The customer waits if there
is nothing to consume.

There is a producer and a consumer; the producer creates the


item and stores it in a buffer while the consumer consumes it.
For example, print software generates characters that the
printer driver consumes. A compiler can generate assembly
code, which an assembler can use. In addition, the assembler
may produce object modules that are used by the loader.
Producer Process

while (true) {
    /* produce an item and place it in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer Process

while (true) {
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
Where,
o The producer uses the in variable to determine the next
empty slot in the buffer.
o The out variable is used by the consumer to determine
where the item is located.

o The counter is used by producers and consumers to


determine the number of filled slots in the buffer.
Shared Resources
There are two shared resources:
1. Buffer
2. Counter
Inconsistency occurs when the accesses of the producer and consumer are not synchronized. If both the producer and the consumer execute concurrently without any control, the value of the counter used by both can become incorrect. These processes share the following variables:
var n;                               { number of buffer slots }
type item = ... ;                    { type of the buffered items }
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;
The variables in and out are both set to 0 by default. The shared
buffer contains two logical pointers, in and out, which are
implemented as a circular array. The In variables point to the
buffer's next free position, while the Out variables point to the
buffer's first full position. The buffer is empty when in = out,
and it is filled when in+1 mod n = out.

Threads:

A thread is a single sequential flow of execution of tasks of a


process so it is also known as thread of execution or thread of
control. There is a way of thread execution inside the process

of any operating system. Apart from this, there can be more


than one thread inside a process. Each thread of the same
process makes use of a separate program counter and a stack of
activation records and control blocks. Thread is often referred
to as a lightweight process.

The process can be split down into so many threads. For


example, in a browser, many tabs can be viewed as threads.
MS Word uses many threads - formatting text from one thread,
processing input from another thread, etc.
Need of Thread:
o It takes far less time to create a new thread in an existing
process than to create a new process.
o Threads can share common data, so they do not need to use inter-process communication.
o Context switching is faster when working with threads.

o It takes less time to terminate a thread than a process.


Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.
User-level thread
The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user performs a user-level thread blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes. Examples: Java threads, POSIX threads, etc.
Advantages of User-level threads
1. The user threads can be easily implemented than the
kernel thread.
2. User-level threads can be applied to such types of
operating systems that do not support threads at the kernel-
level.
3. It is faster and more efficient.
4. Context switch time is shorter than the kernel-level
threads.
5. It does not require modifications of the operating system.
6. User-level threads representation is very simple. The
register, PC, stack, and mini thread control blocks are
stored in the address space of the user-level process.

7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread
and the kernel.
2. If a thread causes a page fault, the entire process is
blocked.

Kernel level thread


The operating system recognizes kernel-level threads. There is a thread control block and a process control block in the system for each thread and process. Kernel-level threads are implemented by the operating system; the kernel knows about all the threads and manages them. The kernel offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context switch time is longer. However, if one kernel-level thread performs a blocking operation, another thread of the same process can continue execution. Examples: Windows, Solaris.

Advantages of Kernel-level threads


1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that block frequently.
Disadvantages of Kernel-level threads
1. The kernel thread manages and schedules all threads.
2. The implementation of kernel threads is more difficult than that of user threads.
3. The kernel-level thread is slower than user-level threads.
Components of Threads

Any thread has the following components.


1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When the process
is split into many threads, and each thread is treated as a
job, the number of jobs done in the unit time increases.
That is why the throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When
you have more than one thread in one process, you can
schedule more than one thread in more than one processor.
o Faster context switch: The context switching period
between threads is less than the process context switching.
The process context switch means more overhead for the
CPU.
o Responsiveness: When the process is split into several
threads, and when a thread completes its execution, that
process can be responded to as soon as possible.
o Communication: Multiple-thread communication is
simple because the threads share the same address space,
while in process, we adopt just a few exclusive
communication strategies for communication between
two processes.
o Resource sharing: Resources can be shared between all
threads within a process, such as code, data, and files.
Note: The stack and register cannot be shared between
threads. There is a stack and register for each thread.
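To make the shared address space concrete, here is a minimal POSIX threads sketch in C (assuming a pthreads system; compile with -pthread). Both threads update one shared counter under a mutex, while each thread's loop counter lives on its own private stack:

#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                              /* shared by all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {               /* i lives on this thread's stack */
        pthread_mutex_lock(&lock);                   /* serialize access to shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);         /* two threads, one process */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);        /* prints 200000 */
    return 0;
}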

Interprocess Communication:

In general, Inter Process Communication is a type of


mechanism usually provided by the operating system (or OS).
The main aim or goal of this mechanism is to provide
communication between several processes. In short, the intercommunication allows a process to let another process know that some event has occurred.
Let us now look at the general definition of inter-process
communication, which will explain the same thing that we have
discussed above.
Definition
"Inter-process communication is used for exchanging useful
information between numerous threads in one or more
processes (or programs)."
To understand inter process communication, you can consider
the following given diagram that illustrates the importance of
inter-process communication:

Role of Synchronization in Inter Process Communication


It is one of the essential parts of inter process communication.
Typically, this is provided by interprocess communication
control mechanisms, but sometimes it can also be controlled by
communication processes.
These are the following methods that used to provide the
synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process thread can enter
the critical section at a time. This also helps in synchronization
and creates a stable state to avoid the race condition.
Semaphore:-
Semaphore is a type of variable that usually controls the access
to the shared resources by several processes. Semaphore is
further divided into two types which are as follows:
1. Binary Semaphore
2. Counting Semaphore
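As a rough sketch of how a binary semaphore provides mutual exclusion (assuming POSIX semaphores, as on Linux; the function name task is illustrative; compile with -pthread):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                         /* binary semaphore used as a lock */
int shared = 0;

void *task(void *arg) {
    for (int i = 0; i < 50000; i++) {
        sem_wait(&mutex);            /* P (wait): decrement, or block if value is 0 */
        shared++;                    /* critical section */
        sem_post(&mutex);            /* V (signal): increment and wake a waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);          /* 0 = shared among threads; initial value 1 */
    pthread_create(&a, NULL, task, NULL);
    pthread_create(&b, NULL, task, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared); /* prints 100000 */
    sem_destroy(&mutex);
    return 0;
}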
Barrier:-
A barrier does not allow an individual process to proceed until all the participating processes reach it. It is used by many parallel languages, and collective routines impose barriers.

Spinlock:-
Spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits, or stays in a loop, while checking whether the lock is available. It is known as busy waiting because even though the process is active, it does not perform any useful operation (or task).
Approaches to Interprocess Communication
We will now discuss some different approaches to inter-process
communication which are as follows:

These are a few different approaches for Inter-Process


Communication:
1. Pipes
2. Shared Memory
3. Message Queue

4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
To understand them in more detail, we will discuss each of
them individually.
Pipe:-
The pipe is a type of data channel that is unidirectional in
nature. It means that the data in this type of data channel can be
moved in only a single direction at a time. Still, one can use two channels of this type, so that one is able to send and receive data between two processes. Typically, it uses the standard methods for input and output. These pipes are used in all types of POSIX systems and in different versions of Windows operating systems as well.
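A minimal sketch of a pipe on a POSIX system: the parent writes into the write end and the child reads from the read end (error handling kept to a minimum):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[16];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {                /* child process */
        close(fd[1]);                 /* child only reads */
        read(fd[0], buf, sizeof(buf));
        printf("child received: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                     /* parent only writes */
    write(fd[1], "hello", 6);         /* 6 bytes: "hello" plus '\0' */
    close(fd[1]);
    wait(NULL);                       /* reap the child */
    return 0;
}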
Shared Memory:-
It can be referred to as a type of memory that can be used or
accessed by multiple processes simultaneously. It is primarily
used so that the processes can communicate with each other.
Therefore the shared memory is used by almost all POSIX and
Windows operating systems as well.
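A minimal POSIX shared-memory sketch; the segment name "/demo_shm" is arbitrary, and on older systems the program may need to be linked with -lrt:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    /* create (or open) a named shared-memory object and size it */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    if (fd == -1) return 1;
    ftruncate(fd, 4096);

    /* map it; any process that maps the same name sees the same bytes */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) return 1;

    strcpy(mem, "hello from shared memory");
    printf("wrote: %s\n", mem);

    munmap(mem, 4096);
    close(fd);
    shm_unlink("/demo_shm");          /* remove the name when done */
    return 0;
}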
Message Queue:-
In general, several different processes are allowed to read and write data to the message queue. The messages are stored in the queue until their recipients retrieve them. In short, we can also say that the message queue is very helpful in inter-process communication and is used by all operating systems.

To understand the concept of Message queue and Shared


memory in more detail, let's take a look at its diagram given
below:

Message Passing:-
It is a type of mechanism that allows processes to synchronize and communicate with each other. By using message passing, the processes can communicate with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides
two operations that are as follows:
o send (message)
o received (message)
Note: The size of the message can be fixed or variable.

Direct Communication:-
In this type of communication process, usually, a link is created
or established between two communicating processes.
However, in every pair of communicating processes, only one
link can exist.

Indirect Communication
Indirect communication can only exist or be established when
processes share a common mailbox, and each pair of these
processes shares multiple communication links. These shared
links can be unidirectional or bi-directional.
FIFO:-
It is a type of general communication between two unrelated
processes. It can also be considered as full-duplex, which
means that one process can communicate with another process
and vice versa.
Some other different approaches
o Socket:-

It acts as a type of endpoint for receiving or sending the data in


a network. It works for data sent between processes on the same computer or between different computers on the same network. Hence, it is used by several types of operating systems.
o File:-
A file is a type of data record or a document stored on the disk
and can be acquired on demand by the file server. Another most
important thing is that several processes can access that file as
required or needed.
o Signal:-
As its name implies, signals are a minimal form of inter-process communication. Typically, they are system messages sent by one process to another. Therefore, they are not used for sending data but for sending remote commands between multiple processes.
Why do we need interprocess communication?
There are numerous reasons to use inter-process
communication for sharing the data. Here are some of the most
important reasons that are given below:
o It helps to speed up modularity
o Computational speedup
o Privilege separation
o Convenience
o Helps operating system to communicate with each other
and synchronize their actions as well.

SECTION-II

CPU Scheduling

Basic concepts of CPU Scheduling:

CPU Scheduling is a process that allows one process to use


the CPU while another process is delayed (in standby) due to
unavailability of any resources such as I / O etc, thus making
full use of the CPU. The purpose of CPU Scheduling is to
make the system more efficient, faster, and fairer.
Whenever the CPU becomes idle, the operating system must
select one of the processes in the ready queue for execution. The selection is done by the short-term (CPU) scheduler. The scheduler selects from among the processes in memory that are ready to execute and assigns the CPU to one of them.

In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle. This is an overhead since it wastes time and causes the problem of starvation. However, in multiprogramming systems, the CPU doesn't remain idle during the waiting time of the process; it starts executing other processes. The operating system has to decide which process the CPU will be given to.
In Multiprogramming systems, the Operating system
schedules the processes on the CPU to have the maximum
utilization of it and this procedure is called CPU scheduling.
The Operating System uses various scheduling algorithm to
schedule the processes.

This is a task of the short term scheduler to schedule the CPU


for the number of processes present in the Job Pool. Whenever
the running process requests some IO operation then the short
term scheduler saves the current context of the process (also
called PCB) and changes its state from running to waiting.
During the time, process is in waiting state; the Short term
scheduler picks another process from the ready queue and
assigns the CPU to this process. This procedure is
called context switching.
What is the need for CPU scheduling algorithm?
CPU scheduling is the process of deciding which process will
own the CPU to use while another process is suspended. The
main function of the CPU scheduling is to ensure that
whenever the CPU remains idle, the OS has at least selected
one of the processes available in the ready queue.

What is saved in the Process Control Block?


The Operating system maintains a process control block during
the lifetime of the process. The Process control block is deleted
when the process is terminated or killed. There is the following
information which is saved in the process control block and is
changing with the state of the process.
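As an illustration, such a PCB can be sketched as a C structure; the fields and names below are illustrative, not taken from any particular kernel:

/* A simplified, illustrative sketch of a process control block (PCB). */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier                  */
    enum proc_state state;           /* current process state               */
    unsigned long   program_counter; /* saved PC at the last context switch */
    unsigned long   registers[16];   /* saved CPU register contents         */
    int             priority;        /* CPU-scheduling information          */
    void           *page_table;      /* memory-management information       */
    int             open_files[16];  /* I/O status information              */
    unsigned long   cpu_time_used;   /* accounting information              */
};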

Why do we need Scheduling?


In multiprogramming, if the long-term scheduler picks more I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running
to waiting then there may always be a possibility of deadlock
in the system. Hence to reduce this overhead, the OS needs to
schedule the jobs to get the optimal utilization of the CPU and to avoid the possibility of deadlock.

Scheduling criteria:

Different CPU scheduling algorithms have different


properties and the choice of a particular algorithm depends on

the various factors. Many criteria have been suggested for


comparing CPU scheduling algorithms.
The criteria include the following:
1. CPU utilisation: The main objective of any CPU
scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 but
in a real-time system, it varies from 40 to 90 percent
depending on the load upon the system.
2. Throughput: A measure of the work done by the CPU is the
number of processes being executed and completed per unit
of time. This is called throughput. The throughput may vary
depending upon the length or duration of the processes.
3. Turnaround time: For a particular process, an important
criterion is how long it takes to execute that process. The
time elapsed from the time of submission of a process to the
time of completion is known as the turnaround time. Turn-
around time is the sum of times spent waiting to get into
memory, waiting in the ready queue, executing in CPU, and
waiting for I/O. The formula to calculate Turn Around Time = Completion Time – Arrival Time.
4. Waiting time: A scheduling algorithm does not affect the
time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e.
time spent by a process waiting in the ready queue. The
formula for calculating Waiting Time = Turnaround Time –
Burst Time.
5. Response time: In an interactive system, turn-around time
is not the best criteria. A process may produce some output
fairly early and continue computing new results while
previous results are being output to the user. Thus another
criteria is the time taken from submission of the process of
request until the first response is produced. This measure is
called response time. The formula to calculate Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time.
6. Completion time: This is the time when the process
completes its execution.

Scheduling algorithms : FCFS, SJF, Round Robin &


Queue Algorithms.

Scheduling Algorithm:

There are various algorithms which are used by the Operating


System to schedule the processes on the processor in an
efficient way.
The Purpose of a Scheduling algorithm
1. Maximum CPU utilization
2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time
There are the following algorithms which can be used to
schedule the jobs.
1. First Come First Serve
It is the simplest algorithm to implement. The process with the
minimal arrival time will get the CPU first. The lesser the
arrival time, the sooner will the process gets the CPU. It is the
non-preemptive type of scheduling.

2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a
time quantum (slice). All the processes will get executed in the
cyclic way. Each of the process will get the CPU for a small
amount of time (called time quantum) and then get back to the
ready queue to wait for its next turn. It is a preemptive type of
scheduling.
3. Shortest Job First
The job with the shortest burst time will get the CPU first. The
lesser the burst time, the sooner will the process get the CPU.
It is the non-preemptive type of scheduling.
4. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS
schedules the Job according to the remaining time of the
execution.
5. Priority based scheduling
In this algorithm, the priority will be assigned to each of the
processes. The higher the priority, the sooner will the process
get the CPU. If the priority of the two processes is same then
they will be scheduled according to their arrival time.
6. Highest Response Ratio Next
In this scheduling Algorithm, the process with highest response
ratio will be scheduled next. This reduces the starvation in the
system.

FCFS Algorithm:

First come first serve (FCFS) scheduling algorithm simply


schedules the jobs according to their arrival time. The job
which comes first in the ready queue will get the CPU first. The
lesser the arrival time of the job, the sooner will the job get the
CPU. FCFS scheduling may cause the problem of starvation if
the burst time of the first process is the longest among all the
jobs.
Advantages of FCFS
o Simple
o Easy
o First come, first served
Disadvantages of FCFS
1. The scheduling method is non preemptive, the process
will run to the completion.
2. Due to the non-preemptive nature of the algorithm, the
problem of starvation may occur.
3. Although it is easy to implement, but it is poor in
performance since the average waiting time is higher as
compare to other scheduling algorithms.
Example
Let's take an example of The FCFS scheduling algorithm. In
the Following schedule, there are 5 processes with process
ID P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1,
P2 at time 2, P3 arrives at time 3 and process P4 arrives at time 4 in the ready queue. The processes and their respective arrival and burst times are given in the following table.
The Turnaround time and the waiting time are calculated by
using the following formula.
1. Turn Around Time = Completion Time - Arrival Time
2. Waiting Time = Turnaround time - Burst Time
The average waiting Time is determined by summing the
respective waiting time of all the processes and divided the sum
by the total number of processes.

Process ID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
P0 | 0 | 2  | 2  | 2  | 0
P1 | 1 | 6  | 8  | 7  | 1
P2 | 2 | 4  | 12 | 10 | 6
P3 | 3 | 9  | 21 | 18 | 9
P4 | 4 | 12 | 33 | 29 | 17
Avg Waiting Time = (0 + 1 + 6 + 9 + 17)/5 = 33/5 = 6.6



(Gantt chart: | P0 | P1 | P2 | P3 | P4 | with time boundaries 0, 2, 8, 12, 21, 33)
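As a rough sketch, the table above can be reproduced with a few lines of C, assuming the processes are already ordered by arrival time (FCFS order):

#include <stdio.h>

int main(void) {
    /* data from the example above, listed in arrival order */
    int arrival[] = {0, 1, 2, 3, 4};
    int burst[]   = {2, 6, 4, 9, 12};
    int n = 5, time = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])
            time = arrival[i];            /* CPU idles until the process arrives */
        time += burst[i];                 /* non-preemptive: run to completion */
        int tat  = time - arrival[i];     /* turnaround = completion - arrival */
        int wait = tat - burst[i];        /* waiting = turnaround - burst */
        total_wait += wait;
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", i, time, tat, wait);
    }
    printf("Average waiting time = %.1f\n", (double)total_wait / n);
    return 0;
}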

SJF Scheduling Algorithm:

The shortest job first (SJF) or shortest job next, is a scheduling


policy that selects the waiting process with the smallest
execution time to execute next. SJF, also known as Shortest Job Next (SJN), can be preemptive or non-preemptive.

Characteristics of SJF Scheduling:

 Shortest Job first has the advantage of having a minimum


average waiting time among all scheduling algorithms.
 It is a Greedy Algorithm.
 It may cause starvation if shorter processes keep coming.
This problem can be solved using the concept of ageing.
 It is practically infeasible as Operating System may not
know burst times and therefore may not sort them. While it
is not possible to predict execution time, several methods
can be used to estimate the execution time for a job, such as
a weighted average of previous execution times.
 SJF can be used in specialized environments where accurate
estimates of running time are available.

Algorithm:
 Sort all the processes according to the arrival time.

 Then select that process that has minimum arrival time and

minimum Burst time.


 After completion of the process make a pool of processes

that arrives afterward till the completion of the previous


process and select that process among the pool which is
having minimum Burst time.

Till now, we were scheduling the processes according to their


arrival time (in FCFS scheduling). However, SJF scheduling
algorithm, schedules the processes according to their burst
time.
In SJF scheduling, the process with the lowest burst time,
among the list of available processes in the ready queue, is
going to be scheduled next.
However, it is very difficult to predict the burst time needed for
a process hence this algorithm is very difficult to implement in
the system.
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. May suffer with the problem of starvation
2. It is not implementable because the exact Burst time for a
process can't be known in advance.
There are different techniques available by which, the CPU
burst time of the process can be determined. We will discuss
them later in detail.
Example
In the following example, there are five jobs named as P1, P2,
P3, P4 and P5. Their arrival time and burst time are given in the
table below.

PID | Arrival Time | Burst Time | Completion Time | Turn Around Time | Waiting Time
1 | 1 | 7  | 8  | 7  | 0
2 | 3 | 3  | 13 | 10 | 7
3 | 6 | 2  | 10 | 4  | 2
4 | 7 | 10 | 31 | 24 | 14
5 | 9 | 8  | 21 | 12 | 4

Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0 to 1 (the time at which the first process arrives).
According to the algorithm, the OS schedules the process
which is having the lowest burst time among the available
processes in the ready queue.
Till now, we have only one process in the ready queue hence
the scheduler will schedule this to the processor no matter what
is its burst time.
This will be executed till 8 units of time. Till then we have three
more processes arrived in the ready queue hence the scheduler
will choose the process with the lowest burst time.
Among the processes given in the table, P3 will be executed
next since it is having the lowest burst time among all the
available processes.

So that's how the procedure will go on in shortest job first


(SJF) scheduling algorithm.

Avg Waiting Time = 27/5
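A minimal non-preemptive SJF sketch in C using the data from the table above: at each scheduling point it picks the arrived, unfinished process with the smallest burst time:

#include <stdio.h>

int main(void) {
    int arrival[] = {1, 3, 6, 7, 9};       /* data from the example above */
    int burst[]   = {7, 3, 2, 10, 8};
    int n = 5, done[5] = {0}, finished = 0, time = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)        /* choose shortest available job */
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }  /* CPU idle: nothing arrived yet */
        time += burst[pick];               /* run the chosen job to completion */
        done[pick] = 1;
        finished++;
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", pick + 1, time,
               time - arrival[pick], time - arrival[pick] - burst[pick]);
    }
    return 0;
}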

Round Robin Algorithm:

Round Robin scheduling algorithm is one of the most popular


scheduling algorithm which can actually be implemented in
most of the operating systems. This is the preemptive
version of first come first serve scheduling. The Algorithm
focuses on Time Sharing. In this algorithm, every process gets
executed in a cyclic way. A certain time slice is defined in the
system which is called time quantum. Each process present in
the ready queue is assigned the CPU for that time quantum, if
the execution of the process is completed during that time then
the process will terminate else the process will go back to
the ready queue and waits for the next turn to complete the
execution.

Characteristics of Round Robin CPU Scheduling


Algorithm:

 It is simple, easy to implement, and starvation-free as all


processes get fair share of CPU.
 One of the most commonly used technique in CPU
scheduling as a core.
 It is preemptive as processes are assigned CPU only for a
fixed slice of time at most.
 The disadvantage of it is more overhead of context
switching.

Advantages
1. It can be actually implementable in the system because it
is not depending on the burst time.
2. It doesn't suffer from the problem of starvation or convoy
effect.
3. All the jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time
in the system.
2. The lower the time quantum, the higher the context
switching overhead in the system.
3. Deciding a perfect time quantum is really a very difficult
task in the system.
Examples to show working of Round Robin Scheduling
Algorithm:

Example-1: Consider the following table of arrival time and


burst time for four processes P1, P2, P3 and P4, and given Time Quantum = 2 ms

Process | Burst Time | Arrival Time
P1 | 5 ms | 0 ms
P2 | 4 ms | 1 ms
P3 | 2 ms | 2 ms
P4 | 1 ms | 4 ms

The Round Robin CPU Scheduling Algorithm will work on


the basis of steps as mentioned below:
At time = 0,
 The execution begins with process P1, which has burst time

5.
 Here, every process executes for at most 2 ms (the time quantum period). P2 and P3 are still in the waiting queue.


Time 0-2 ms: P1 (arrival 0 ms, burst 5 ms) runs for 2 ms; remaining burst 3 ms. Ready queue: P2, P3.

At time = 2,
 The processes P2 and P3 arrive in the ready queue, and P2 starts executing for the TQ period.


Time 2-4 ms: P2 (arrival 1 ms, burst 4 ms) runs for 2 ms; remaining burst 2 ms. Ready queue: P3, P1 (P1 has 3 ms left).

At time = 4,
 The process P4 arrives in the ready queue,

 Then P3 executes for TQ period.

Time 4-6 ms: P3 (arrival 2 ms, burst 2 ms) runs for 2 ms and completes. Ready queue: P1, P4, P2.

At time = 6,
 Process P3 completes its execution
 Process P1 starts executing for the TQ period, as it is next in the ready queue
Time 6-8 ms: P1 runs for 2 ms; remaining burst 1 ms. Ready queue: P4, P2.

At time = 8,
 Process P4 starts executing; it will not execute for the full time quantum period as it has burst time = 1


 Hence, it will execute for only 1ms.

Time 8-9 ms: P4 (arrival 4 ms, burst 1 ms) runs for 1 ms and completes. Ready queue: P2, P1.

At time = 9,
 Process P4 completes its execution

 Process P2 starts executing for TQ period as it is next in

the ready queue


Time 9-11 ms: P2 runs for its remaining 2 ms and completes. Ready queue: P1.

At time = 11,
 Process P2 completes its execution.

 Process P1 starts executing; it will execute for 1 ms only


Time 11-12 ms: P1 runs for its remaining 1 ms and completes.

At time = 12,
 Process P1 completes its execution.

 The overall execution of the processes will be as shown

below:
Time 0-2 ms  : P1 runs; remaining burst 3 ms; ready queue: P2, P3
Time 2-4 ms  : P2 runs; remaining burst 2 ms; ready queue: P3, P1
Time 4-6 ms  : P3 runs and completes; ready queue: P1, P4, P2
Time 6-8 ms  : P1 runs; remaining burst 1 ms; ready queue: P4, P2
Time 8-9 ms  : P4 runs and completes; ready queue: P2, P1
Time 9-11 ms : P2 runs and completes; ready queue: P1
Time 11-12 ms: P1 runs and completes

Gantt chart will be as following below:
| P1 | P2 | P3 | P1 | P4 | P2 | P1 |
0    2    4    6    8    9    11   12

How to compute the following times in Round Robin using a program?

 Completion Time: Time at which process completes its


execution.
 Turn Around Time: Time Difference between completion
time and arrival time. Turn Around Time = Completion
Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn
around time and burst time.
Waiting Time = Turn Around Time – Burst Time

Now, let's calculate the average waiting time and turnaround time:
Processes AT BT CT TAT WT

P1 0 5 12 12-0 = 12 12-5 = 7

P2 1 4 11 11-1 = 10 10-4 = 6

P3 2 2 6 6-2 = 4 4-2 = 2

P4 4 1 9 9-4 = 5 5-1 = 4

Now,
 Average turnaround time = (12 + 10 + 4 + 5)/4 = 31/4 = 7.75
 Average waiting time = (7 + 6 + 2 + 4)/4 = 19/4 = 4.75

Example 2: Consider the following table of arrival time and


burst time for three processes P1, P2 and P3, and given Time Quantum = 2 ms

Process Burst Time Arrival Time

P1 10 ms 0 ms

P2 5 ms 0 ms

P3 8 ms 0 ms

Similarly, the Gantt chart for this example:
| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P2 | P3 | P1 | P3 | P1 |
0    2    4    6    8    10   12   14   15   17   19   21   23

Now, let's calculate the average waiting time and turnaround time:

Processes AT BT CT TAT WT

P1 0 10 23 23-0 = 23 23-10 = 13

P2 0 5 15 15-0 = 15 15-5 = 10

P3 0 8 21 21-0 = 21 21-8 = 13

Total Turn Around Time = 59 ms


So, Average Turn Around Time = 59/3 = 19.667 ms
And, Total Waiting Time = 36 ms
So, Average Waiting Time = 36/3 = 12.00 ms
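A minimal Round Robin sketch in C for Example 2; since all processes arrive at time 0, a simple cyclic scan over the processes matches the ready-queue order:

#include <stdio.h>

int main(void) {
    int burst[] = {10, 5, 8};             /* P1, P2, P3 from Example 2 */
    int n = 3, tq = 2, time = 0, left = 3;
    int rem[3], ct[3];
    for (int i = 0; i < n; i++) rem[i] = burst[i];

    while (left > 0) {
        for (int i = 0; i < n; i++) {     /* cycle through the processes */
            if (rem[i] == 0) continue;
            int slice = rem[i] < tq ? rem[i] : tq;
            time += slice;                /* run for one quantum (or less) */
            rem[i] -= slice;
            if (rem[i] == 0) {            /* finished in this slice */
                ct[i] = time;
                left--;
            }
        }
    }
    for (int i = 0; i < n; i++)           /* arrival = 0, so TAT = CT */
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
               i + 1, ct[i], ct[i], ct[i] - burst[i]);
    return 0;
}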

Queue Algorithms:

Multilevel Queue Scheduling


No single scheduling algorithm suits every process: in a general system, some processes require scheduling using a priority algorithm. While some processes want to stay in the system (interactive processes), others are background processes whose execution can be delayed.
The scheduling algorithms used between queues and within each queue may differ between systems. A round-robin method with varying time quanta is typically utilized for such


maintenance. Several types of scheduling algorithms are
designed for circumstances where the processes can be readily
separated into groups. There are two sorts of processes that
require different scheduling algorithms because they have
varying response times and resource requirements. The
foreground (interactive) and background (batch) processes are distinguished. Foreground processes take priority over background processes.
The ready queue is partitioned into several separate queues using the multilevel queue scheduling technique. These
processes are assigned to one queue based on their priority,
such as memory size, process priority, or type. The method for
scheduling each queue is different. Some queues are utilized
for the foreground process, while others are used for the
background process. The foreground queue may be scheduled
using a round-robin method, and the background queue
can be scheduled using an FCFS strategy.
Advantages and Disadvantages of Multilevel Queue
Scheduling
There are various advantages and disadvantages of multilevel
queue scheduling. Some of the advantages and disadvantages
of the multilevel queue scheduling are as follows:
Advantages
1. You can use multilevel queue scheduling to apply
different scheduling methods to distinct processes.
2. It will have low overhead in terms of scheduling.

Disadvantages
1. There is a risk of starvation for lower priority processes.
2. It is rigid in nature.
Example
Let's take an example of a multilevel queue-scheduling
algorithm with five queues to understand how this scheduling
works:

1. System process
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Every queue would have an absolute priority over the low-
priority queues. No process may execute until the high-priority
queues are empty. In the above instance, no other process may
execute until and unless the queues for system, interactive, and

editing processes are empty. If an interactive editing process


enters the ready queue while a batch process is underway, the
batch process will be preempted.
There are the descriptions of the processes that are used in the
above example:
System Process
The OS has its process to execute, which is referred to as the
System Process.
Interactive Process
It is a process that interacts with the user and therefore expects a quick response.
Batch Process
Batch processing is an operating system feature that collects
programs and data into a batch before processing starts.
Student Process
The system process is always given the highest priority,
whereas the student processes are always given the lowest.
Example Problem
Let's take an example of a multilevel queue-scheduling (MQS)
algorithm that shows how the multilevel queue scheduling
work. Consider the four processes listed in the table below
under multilevel queue scheduling. The queue number denotes
the process's queue.

Process | Arrival Time | CPU Burst Time | Queue Number
P1 | 0  | 4 | 1
P2 | 0  | 3 | 1
P3 | 0  | 8 | 2
P4 | 10 | 5 | 1

Queue 1 has a higher priority than queue 2. Round Robin is


used in queue 1 (Time Quantum = 2), while FCFS is used in
queue 2.

Working:
1. Both queues have been processed at the start.
Therefore, queue 1 (P1, P2) runs first (due to greater
priority) in a round-robin way and finishes after 7 units.
2. The process in queue 2 (Process P3) starts running (since
there is no process in queue 1), but while it is executing,
P4 enters queue 1 and interrupts P3, and then P3 takes the
CPU and finishes its execution.
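The dispatch rule itself can be sketched in a few lines of C; the queues, process IDs, and priority levels below are purely illustrative:

#include <stdio.h>

#define QUEUES 3
#define CAP 16

typedef struct { int pid[CAP]; int head, tail; } Queue;
static Queue q[QUEUES];

static void enqueue(int level, int pid) { q[level].pid[q[level].tail++] = pid; }
static int  is_empty(int level)         { return q[level].head == q[level].tail; }

static int dispatch(void) {
    for (int level = 0; level < QUEUES; level++)   /* absolute priority order */
        if (!is_empty(level))
            return q[level].pid[q[level].head++];
    return -1;                                     /* no process is ready */
}

int main(void) {
    enqueue(2, 101);   /* batch process       */
    enqueue(0, 7);     /* system process      */
    enqueue(1, 42);    /* interactive process */

    int pid;
    while ((pid = dispatch()) != -1)
        printf("dispatching PID %d\n", pid);       /* order: 7, 42, 101 */
    return 0;
}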
Multilevel Feedback Scheduling

In a plain multilevel queue, there is a different queue for foreground and background operations, and processes do not switch between queues or change their foreground or background nature; this type of organization benefits from low scheduling overhead but is inflexible.
This strategy prioritizes operations that require I/O and are
interactive. It is a distinct process with a distinct CPU burst
time. It enables a process to switch between queues. If a process
consumes too much processor time, it will be switched to the
lowest priority queue. A process waiting in a lower priority
queue for too long may be shifted to a higher priority queue.
This type of aging prevents starvation.
The parameters of the multilevel feedback queue scheduler are
as follows:
1. The scheduling algorithm for every queue in the system.
2. The queues number in the system.
3. The method for determining when a queue should be
demoted to a lower-priority queue.
4. When a process is upgraded to a higher-priority queue,
this process determines when it gets upgraded.
5. The method for determining which queue a process will enter when that process needs service.

CHAPTER-4

DEADLOCKS

Deadlock characterization:

A deadlock happens in operating system when two or more


processes need some resource to complete their execution that
is held by the other process.
Every process needs some resources to complete its execution.
However, the resource is granted in a sequential order.
1. The process requests for some resource.
2. The OS grants the resource if it is available; otherwise it lets the process wait.
3. The process uses the resource and releases it on completion.
A Deadlock is a situation where each of the computer process
waits for a resource which is being assigned to some another
process. In this situation, none of the process gets executed
since the resource it needs, is held by some other process which
is also waiting for some other resource to be released.
Let us assume that there are three processes P1, P2 and P3.
There are three different resources R1, R2 and R3. R1 is
assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't complete without R2. P2 also demands R3, which is being used by P3, so P2 also stops its execution because it can't continue without R3. P3 in turn demands R1, which is being used by P1, therefore P3 also stops its execution.
In this scenario, a cycle is being formed among the three
processes. None of the process is progressing and they are all
waiting. The computer becomes unresponsive since all the
processes got blocked.

Difference between Starvation and Deadlock

1. Deadlock is a situation where every process in the set is blocked and none proceeds. Starvation is a situation where the low-priority processes get blocked while the high-priority processes proceed.
2. Deadlock is an infinite waiting. Starvation is a long waiting, but not infinite.
3. Every deadlock is always a starvation. Every starvation need not be a deadlock.
4. In deadlock, the requested resource is blocked by the other process. In starvation, the requested resource is continuously used by the higher-priority processes.
5. Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously. Starvation occurs due to uncontrolled priority and resource management.

Necessary conditions for Deadlocks


1. Mutual Exclusion
A resource can only be shared in a mutually exclusive manner. It implies that two processes cannot use the same resource at the same time.
2. Hold and Wait
A process waits for some resources while holding another
resource at the same time.
3. No preemption
A resource cannot be forcibly taken away from a process once it has been allocated; the process must release it voluntarily after completing its task.
4. Circular Wait

All the processes must be waiting for the resources in a


cyclic manner so that the last process is waiting for the
resource which is being held by the first process.
Methods for handling deadlocks

Deadlock characterization describes the distinctive features


that are the cause of deadlock occurrence. Deadlock is a
condition in the multiprogramming environment where the
executing processes get stuck in the middle of execution
waiting for the resources that have been held by the other
waiting processes thereby preventing the execution of the
processes.
In this content, we will discuss the characteristics that are essential for the occurrence of deadlock. The four conditions that must hold at the same time for a deadlock to occur are: mutual exclusion, hold and wait, no preemption, and circular wait.

Deadlock Characterization
1. Deadlock Prerequisites
2. Systems Resource Allocation Graph
Deadlock Prerequisites
i) Mutual Exclusion
In a multiprogramming environment, there may be several
processes requesting the same resource at a time. The mutual
exclusion condition, allow only a single process to access the
resource at a time. While the other processes requesting the
same resource must wait and delay their execution until it has
been released. The mutual exclusion condition prevents two or
more processes to access the same resource at a time.

ii) Hold and wait


The hold and wait condition simply means that the process
must be holding access to one resource and must be waiting to
get hold of other resources that have been acquired by the other
processes.

iii) No Preemption
A process acquiring a resource, cannot be preempted in
between, to release the acquired resource. Instead, the process
must voluntarily release the resource it has acquired when the
task of the process has been completed.
iv) Circular Wait
The processes must be waiting in a circular pattern to acquire
the resource. This is similar to hold and waits the only
difference is that the processes are waiting in a circular pattern.
Let us discuss this with an example: there are five processes, i.e. P0, P1, P2, P3, P4. Now the process P0 is waiting to acquire the resource held by P1, the process P1 is waiting to acquire the resource held by P2, ..., and the process P4 is waiting to acquire the resource held by P0.

System Resource Allocation Graph


The system reallocation graph is a directed graph that briefs
you about the deadlock more precisely. Like every graph, it
also has a set of vertices and a set of edges. Further, the set of
vertices can be classified into two types of nodes P and R.
Where P is the set of vertices indicating the set of active
processes and R is the set of vertices indicating all types of
resources in the system.

When a process requests a resource, it is denoted by a request edge in the resource-allocation graph. The request edge is a directed edge from the requesting process Pi to the requested resource Rj, i.e. Pi -> Rj.
Well, when a resource is allotted to some process then it is
denoted by the assignment edge. The assignment edge is the
directed edge from the instance of resource Rj to the process
Pi, i.e. Rj -> Pi.
In the graph, resources are denoted by the rectangles and the
processes are denoted by the circles. If a resource has multiple
instances then it is denoted by the dots inside the rectangle.
When a process request for an instance of the resource it directs
a request edge to the resource. If the resource is able to allocate
the resource instance to the requesting process then
immediately the request edge is converted to assignment edge.
The request edge always points to the resource rectangle in the
graph, not to dots (instance) inside the rectangle. Although the
assignment edge nominates the dot (instance) to a process.
To understand deadlock we let us take an example. Consider
we have following set of nodes and edges.
1. There are three active processes P = {P1, P2, P3}
2. There are four resources R = { R1, R2, R3, R4}
3. The set of request edge and assignment edges we have
E = { P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P2, R2 -> P1, R3 -> P3}
Observe that resource R1 has only one instance, resource R2 has two instances, resource R3 has one instance, and resource R4 has three instances. Let us check the status of the processes.

The process P1 has requested an instance of resource R1 and is already holding an instance of resource R2. The process P2 has requested an instance of resource R3 and is already holding instances of resources R1 and R2. The process P3 has not requested any resource instance but is holding an instance of resource R3.

Remember if the resource allocation graph has a cycle and


every resource has a single instance then it implies that a
deadlock has occurred. In case, the resources have multiple
instances then a cycle in the graph need not be indicating the
occurrence of deadlock.
Consider that the process P3 is requesting an instance of resource R2, which is already held by the processes P1 and P2. In this case, you will observe that there are two cycles in the resource allocation graph:
 P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
110

 P2 -> R3 -> P3 -> R2 -> P2
Processes P1, P2 and P3 are now in deadlock, as each process in the cycle is waiting for a resource held by another process. But not every cycle in the resource-allocation graph indicates a deadlock, so you have to observe each cycle carefully while dealing with a deadlock problem. This is how you can characterize the deadlock in the system.
Banker’s Algorithm:
The Banker's algorithm is used to avoid deadlock and to allocate resources safely to each process in the computer system. Before deciding whether an allocation should be granted to a process, it examines all possible activities and checks that the resulting state is safe (the safe-state test). It also helps the operating system to successfully share the resources between all the processes. The Banker's algorithm is named so because it resembles the way a bank checks whether a loan amount can be safely sanctioned to a person. In this section, we will learn the Banker's Algorithm in detail and also solve problems based on it. To understand the Banker's Algorithm, first consider a real-world example of it.
Suppose the number of account holders in a particular bank is 'n', and the total money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts the loan amount from the total cash it has and then checks that the remaining cash is still enough to satisfy the demands of its other account holders before approving the loan amount. These steps are taken so that if another person applies for a loan or withdraws some amount from the bank, the bank can manage and operate everything without any disruption in the functionality of the banking system.
Similarly, it works in an operating system. When a new process is created in a computer system, the process must provide all the relevant information to the operating system: its upcoming resource requests, the number of resources required, and how long the resources may be held. Based on these criteria, the operating system decides which process sequence should be executed and which should wait so that no deadlock occurs in the system. Therefore, it is also known as the deadlock-avoidance algorithm in the operating system.
Why is the Banker’s algorithm named so?
The Banker's algorithm is named so because it mirrors the way a banking system checks whether a loan can be sanctioned to a person or not. Suppose there are n account holders in a bank and the total sum of their money is S. If a person applies for a loan, the bank first subtracts the loan amount from the total money it has, and it sanctions the loan only if the remaining amount is still enough to meet the demands of all its account holders. It is done so that if all the account holders come to withdraw their money, the bank can easily pay them.
Advantages
Following are the essential characteristics of the Banker's
algorithm:
1. It contains various resources that meet the requirements of
each process.
2. Each process should provide information to the operating
system for upcoming resource requests, the number of
resources, and how long the resources will be held.
3. It helps the operating system manage and control process
requests for each type of resource in the computer system.
4. The algorithm has a Max resource attribute that indicates the maximum number of resources each process can hold in the system.
Disadvantages
1. It requires a fixed number of processes, and no additional process can be started in the system while the existing ones are executing.
2. The algorithm does not allow a process to change its maximum need while it is being processed.
3. Each process has to know and state its maximum resource requirement in advance for the system.
4. Resource requests are guaranteed to be granted only within a finite time, which in practice may be very long.
When working with the Banker's algorithm, it needs to know three things:
1. How much of each resource each process could possibly request. This is denoted by the [MAX] request.
2. How much of each resource each process is currently holding. This is denoted by the [ALLOCATED] resource.
3. How much of each resource the system currently has available. This is denoted by the [AVAILABLE] resource.
Following are the important data structure terms applied in the Banker's algorithm. Suppose n is the number of processes and m is the number of types of resources used in the computer system.
1. Available: It is an array of length 'm' that defines the number of available resources of each type in the system. When Available[j] = K, it means that 'K' instances of resource type R[j] are available in the system.
2. Max: It is an [n x m] matrix that indicates the maximum number of resources of each type R[j] that each process P[i] may request in the system.
3. Allocation: It is an [n x m] matrix that indicates the number of resources of each type currently allocated to each process in the system. When Allocation[i][j] = K, it means that process P[i] is currently allocated K instances of resource type R[j].
4. Need: It is an [n x m] matrix representing the number of remaining resources needed by each process. When Need[i][j] = K, process P[i] may require K more instances of resource type R[j] to complete its assigned work.
Need[i][j] = Max[i][j] - Allocation[i][j]
5. Finish: It is a vector of length n. It includes a Boolean value (true/false) indicating whether each process has been allocated its requested resources and has released all of them after finishing its task.
The Banker's Algorithm is the combination of the safety algorithm and the resource-request algorithm, which together control the processes and avoid deadlock in a system.
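To make the safety check concrete, here is a minimal sketch of the safety algorithm in C, built on the Available, Max and Allocation structures defined above. The number of processes, the number of resource types and the sample values are assumptions made purely for this example.

#include <stdio.h>
#include <stdbool.h>

#define N 5  /* number of processes (assumed for this example) */
#define M 3  /* number of resource types (assumed) */

/* Returns true if the system is in a safe state and prints a safe sequence. */
bool is_safe(int available[M], int max[N][M], int allocation[N][M]) {
    int need[N][M], work[M];
    bool finish[N] = {false};
    int safe_seq[N], count = 0;

    /* Need = Max - Allocation, as defined above. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - allocation[i][j];
    for (int j = 0; j < M; j++)
        work[j] = available[j];

    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* Pretend process i runs to completion and releases its resources. */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                safe_seq[count++] = i;
                found = true;
            }
        }
        if (!found) return false;  /* no process can proceed: unsafe state */
    }
    printf("Safe sequence: ");
    for (int i = 0; i < N; i++) printf("P%d ", safe_seq[i]);
    printf("\n");
    return true;
}

int main(void) {
    /* Sample values assumed purely for demonstration. */
    int available[M] = {3, 3, 2};
    int max[N][M] = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    printf("System is %s\n", is_safe(available, max, allocation) ? "SAFE" : "UNSAFE");
    return 0;
}

For these assumed values the algorithm finds the safe sequence P1, P3, P4, P0, P2, so the state is safe.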
SECTION-III

MEMORY MANAGEMENT
Logical versus Physical address space:

A logical address is generated by the CPU while a program is running. The logical address is a virtual address, as it does not exist physically; therefore, it is also known as a Virtual Address. This address is used by the CPU as a reference to access the physical memory location. The term Logical Address Space is used for the set of all logical addresses generated from a program's perspective.
The hardware device called the Memory-Management Unit (MMU) is used for mapping a logical address to its corresponding physical address.
A physical address identifies a physical location of the required data in memory. The user never directly deals with the physical address but can access it via its corresponding logical address. The user program generates the logical address and thinks that the program is running at this logical address, but the program needs physical memory for its execution; therefore, the logical address must be mapped to the physical address by the MMU before it is used. The term Physical Address Space is used for the set of all physical addresses corresponding to the logical addresses in a logical address space.
Mapping virtual addresses to physical addresses
Differences Between Logical and Physical Address in
Operating System

1. The basic difference between a logical and a physical address is that the logical address is generated by the CPU from the perspective of a program, whereas the physical address is a location that exists in the memory unit.
2. Logical Address Space is the set of all logical addresses generated by the CPU for a program, whereas the set of all physical addresses mapped to the corresponding logical addresses is called Physical Address Space.
3. The logical address does not exist physically in the memory, whereas the physical address is a location in the memory that can be accessed physically.
4. Identical logical and physical addresses are generated by the compile-time and load-time address binding methods, whereas they differ from each other in the run-time address binding method.
5. The logical address is generated by the CPU while the program is running, whereas the physical address is computed by the Memory Management Unit (MMU).
Comparison Chart:

Parameter     | LOGICAL ADDRESS                            | PHYSICAL ADDRESS
Basic         | Generated by the CPU.                      | Location in a memory unit.
Address Space | Set of all logical addresses generated by  | Set of all physical addresses mapped to
              | the CPU in reference to a program.         | the corresponding logical addresses.
Visibility    | The user can view the logical address of   | The user can never view the physical
              | a program.                                 | address of a program.
Generation    | Generated by the CPU.                      | Computed by the MMU.
Access        | The user can use the logical address to    | The user can access the physical address
              | access the physical address.               | only indirectly, not directly.
Editable      | A logical address can change.              | A physical address will not change.
Also called   | Virtual address.                           | Real address.

Mapping Virtual Addresses to Physical Addresses


Memory consists of a large array of addresses. It is the responsibility of the CPU to fetch the instruction address from the program counter. These instructions may cause loading from or storing to specific memory addresses.
Address binding is the process of mapping from one address space to another address space. Logical addresses are generated by the CPU during execution, whereas a physical address refers to a location in the physical memory unit (the one loaded into memory). Note that users deal only with logical addresses. The MMU translates the logical address, and the output of this process is the appropriate physical address of the data in RAM. Address binding can be done in three different ways:
1. Compile Time: An absolute address can be generated if
you know where a process will reside in memory at
compile time. That is, a physical address is generated in
the program executable during compilation. Loading such
an executable into memory is very fast. But if another
process occupies the generated address space, then the
program crashes, and it becomes necessary to recompile
the program to use virtual address space.
2. Load Time: If it is not known at the compile time where
the process will reside, then relocated addresses will be
generated. The loader translates the relocated address to
an absolute address. The base address of the process in the
main memory is added to all logical addresses by the
loader to generate the absolute address. If the base address
of the process changes, then we need to reload the process
again.
3. Execution Time: The instructions are already loaded into memory and are processed by the CPU. Additional memory may be allocated or deallocated at this time. This method is used if the process can be moved from one memory area to another during execution (dynamic relocation done at load or run time), e.g., compaction.
What is the Memory Management Unit (MMU)?
The run-time mapping between virtual and physical addresses is done by a hardware device known as the MMU. In memory management, the operating system handles the processes and moves them between disk and memory. It keeps track of available and used memory. The Memory Management Unit can be modeled as a combination of these two registers:
1. Base Register: It contains the starting physical address of the process.
2. Limit Register: It specifies the size of the region occupied by the process, relative to the base address.
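To make the base/limit mechanism concrete, here is a minimal sketch in C of how such a check and translation could work; the register values (base 4000, limit 3000) are assumed purely for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Assumed example values: process loaded at physical address 4000,
   occupying a region of 3000 bytes. */
static unsigned base_register  = 4000;
static unsigned limit_register = 3000;

/* Translate a CPU-generated logical address into a physical address. */
unsigned translate(unsigned logical) {
    if (logical >= limit_register) {
        /* Addressing error: trap to the operating system. */
        fprintf(stderr, "Trap: logical address %u out of bounds\n", logical);
        exit(EXIT_FAILURE);
    }
    return base_register + logical;  /* relocation by the base register */
}

int main(void) {
    printf("logical 100  -> physical %u\n", translate(100));   /* 4100 */
    printf("logical 2999 -> physical %u\n", translate(2999));  /* 6999 */
    return 0;
}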
Swapping:

Swapping is a memory management scheme in which any


process can be temporarily swapped from main memory to
secondary memory so that the main memory can be made
available for other processes. It is used to improve main
memory utilization. In secondary memory, the place where the
swapped-out process is stored is called swap space.
The purpose of swapping in an operating system is to access the data present on the hard disk and bring it into RAM so that the application programs can use it. The thing to remember is that swapping is used only when the data is not present in RAM. Although the process of swapping affects the performance of the system, it helps in running larger processes, and multiple processes in parallel, which is why swapping is also known as a technique for memory compaction.
The concept of swapping is divided into two further concepts: swap-in and swap-out.
o Swap-out is the method of removing a process from RAM and adding it to the hard disk.
o Swap-in is the method of removing a program from the hard disk and putting it back into the main memory (RAM).
Example: Suppose the size of a user process is 2048 KB and it is on a standard hard disk with a swapping data transfer rate of 1 Mbps. Let us calculate how long it will take to transfer the process from main memory to secondary memory:
User process size = 2048 KB
Data transfer rate = 1 Mbps = 1024 Kbps
Transfer time = process size / transfer rate
= 2048 / 1024
= 2 seconds = 2000 milliseconds
Taking both swap-out and swap-in into account, the process will take 4000 milliseconds.
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a
single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks
simultaneously. Therefore, processes do not have to wait
very long before they are executed.
4. It improves the main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power, the user may lose all
information related to the program in case of substantial
swapping activity.
2. If the swapping algorithm is not good, the overall method can increase the number of page faults and decrease the overall processing performance.
Note:
o In a single-tasking operating system, only one process occupies the user program area of memory and stays in memory until the process is complete.
o In a multitasking operating system, a situation arises when all the active processes cannot fit in the main memory together; then a process is swapped out of the main memory so that other processes can enter it.

Contiguous allocation:

If the blocks are allocated to the file in such a way that all the logical blocks of the file get contiguous physical blocks on the hard disk, then such an allocation scheme is known as contiguous allocation.
In the image shown below, there are three files in the directory.
The starting block and the length of each file are mentioned in
the table. We can check in the table that the contiguous blocks
are assigned to each file as per its need.
Advantages
1. It is simple to implement.
2. We will get excellent read performance.
3. Supports Random Access into files.
Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.
Difference between Contiguous and Noncontiguous
Memory Allocation

1. Contiguous Memory Allocation: Contiguous memory allocation is basically a method in which a single contiguous section/part of memory is allocated to a process or file that needs it. Because of this, all the available memory space resides together in one place, which means the free/unused memory partitions are not scattered randomly here and there across the whole memory space.
The main memory is a combination of two main portions: one for the operating system and the other for the user programs. We can implement/achieve contiguous memory allocation by dividing the memory into fixed-size partitions.
2. Non-Contiguous Memory Allocation: Non-contiguous memory allocation is, contrary to the contiguous allocation method, a method that allocates memory space present in different locations to a process as per its requirements. As all the available memory space is distributed, the free memory is scattered here and there. This technique of memory allocation helps to reduce the memory wastage that would otherwise arise from internal and external fragmentation.
Difference between Contiguous and Non-Contiguous Memory Allocation:

S.NO. | Contiguous Memory Allocation                | Non-Contiguous Memory Allocation
1.    | Allocates consecutive blocks of memory      | Allocates separate blocks of memory
      | to a file/process.                          | to a file/process.
2.    | Faster in execution.                        | Slower in execution.
3.    | It is easier for the OS to control.         | It is difficult for the OS to control.
4.    | Overhead is minimum, as not many address    | More overheads, as there are more
      | translations are needed while executing     | address translations.
      | a process.                                  |
5.    | Both internal and external fragmentation    | Only external fragmentation occurs in
      | occur in the contiguous memory              | the non-contiguous memory allocation
      | allocation method.                          | method.
6.    | It includes single-partition allocation     | It includes paging and segmentation.
      | and multi-partition allocation.             |
7.    | Wastage of memory is there.                 | No memory wastage is there.
8.    | Swapped-in processes are arranged in the    | Swapped-in processes can be arranged
      | originally allocated space.                 | anywhere in the memory.
9.    | It is of two types:                         | It is of five types:
      | 1. Fixed (or static) partitioning           | 1. Paging
      | 2. Dynamic partitioning                     | 2. Multilevel paging
      |                                             | 3. Inverted paging
      |                                             | 4. Segmentation
      |                                             | 5. Segmented paging
10.   | It could be visualized and implemented      | It could be implemented using
      | using arrays.                               | linked lists.
Paging:

In Operating Systems, Paging is a storage mechanism used to


retrieve processes from the secondary storage into the main
memory in the form of pages.
The main idea behind the paging is to divide each process in
the form of pages. The main memory will also be divided in the
form of frames.
One page of the process is to be stored in one of the frames of
the memory. The pages can be stored at the different locations
of the memory but the priority is always to find the contiguous
frames or holes.
Pages of the process are brought into the main memory only
when they are required otherwise they reside in the secondary
storage.
Different operating systems define different frame sizes. The sizes of all frames must be equal. Considering the fact that the pages are mapped to the frames in paging, the page size needs to be the same as the frame size.
Example
Let us consider the main memory size is 16 KB and the frame size is 1 KB; therefore, the main memory will be divided into a collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each so that one page can be stored in one frame.
Initially, all the frames are empty; therefore, the pages of the processes will get stored in a contiguous way. Frames, pages and the mapping between the two are shown in the image below.
Let us consider that P2 and P4 are moved to the waiting state after some time. Now, 8 frames become empty, and therefore other pages can be loaded in that empty space. The process P5 of size 8 KB (8 pages) is waiting in the ready queue. Given the fact that we have 8 non-contiguous frames available in the memory, and paging provides the flexibility of storing the process at different places, we can load the pages of process P5 in the place of P2 and P4.
Memory Management Unit


The purpose of Memory Management Unit (MMU) is to
convert the logical address into the physical address. The
logical address is the address generated by the CPU for every
page while the physical address is the actual address of the
frame where each page will be stored.
When a page is to be accessed by the CPU by using the logical


address, the operating system needs to obtain the physical
address to access that page physically.
The logical address has two parts.
1. Page Number
2. Offset
Memory management unit of OS needs to convert the page
number to the frame number.
Example
Considering the above image, let's say that the CPU demands the 10th word of the 4th page of process P3. Since page number 4 of process P3 is stored at frame number 9, the 10th word of the 9th frame will be returned as the physical address.
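As a sketch of these mechanics, the following C fragment shows how a logical address could be split into a page number and an offset and then translated through a small page table; the page size and the page-table contents are assumptions made for this example.

#include <stdio.h>

#define PAGE_SIZE 1024  /* 1 KB pages, matching the example above (assumed) */

/* Assumed page table for one process: page_table[p] = frame number. */
int page_table[] = {5, 2, 7, 9};  /* e.g., page 3 is stored in frame 9 */

unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* page number part */
    unsigned offset = logical % PAGE_SIZE;   /* offset within the page */
    unsigned frame  = page_table[page];      /* page number -> frame number */
    return frame * PAGE_SIZE + offset;       /* physical address */
}

int main(void) {
    unsigned logical = 3 * PAGE_SIZE + 10;   /* 10th word of page 3 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}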

Segmentation:

In operating systems, segmentation is a memory management technique in which the memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.
The details about each segment are stored in a table called a segment table. The segment table is stored in one (or many) of the segments. The segment table mainly contains two pieces of information about each segment:
1. Base: It is the base address of the segment.
2. Limit: It is the length of the segment.
Why is Segmentation required?
Till now, we were using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all the processes into pages regardless of the fact that a process may have related parts or functions which need to be loaded onto the same page.
The operating system doesn't care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into the memory at the same time. This decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment contains the same type of functions; for example, the main function can be included in one segment and the library functions can be included in another segment.
Translation of Logical Address into Physical Address by Segment Table
The CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
For example:
Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset; then the maximum segment size is 4096 and the maximum number of segments that can be referred to is 16.
When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; the space information is obtained from the free list maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance,


the operating system can easily translate a logical address into
physical address on execution of a program.
The Segment number is mapped to the segment table. The
limit of the respective segment is compared with the offset. If
the offset is less than the limit then the address is valid
otherwise it throws an error as the address is invalid.
In the case of valid addresses, the base address of the segment
is added to the offset to get the physical address of the actual
word in the main memory.
The above figure shows how address translation is done in case
of segmentation.
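The compare-and-translate step can be sketched in C as below; the segment table values are assumed purely for illustration.

#include <stdio.h>
#include <stdlib.h>

/* One segment table entry: base address and limit (segment length). */
struct segment { unsigned base, limit; };

/* Assumed segment table: segment 0 = code, 1 = data, 2 = stack. */
struct segment seg_table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };

unsigned translate(unsigned seg_no, unsigned offset) {
    if (offset >= seg_table[seg_no].limit) {
        /* Offset is not less than the limit: the address is invalid. */
        fprintf(stderr, "Trap: invalid offset %u for segment %u\n", offset, seg_no);
        exit(EXIT_FAILURE);
    }
    /* Valid address: base of the segment + offset = physical address. */
    return seg_table[seg_no].base + offset;
}

int main(void) {
    printf("segment 2, offset 53 -> physical %u\n", translate(2, 53));  /* 4353 */
    return 0;
}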
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page
table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.
CHAPTER-2

VIRTUAL MEMORY
Demand paging:

According to the concept of virtual memory, in order to execute a process, only a part of the process needs to be present in the main memory, which means that only a few pages will be present in the main memory at any time.
However, deciding which pages need to be kept in the main memory and which need to be kept in the secondary memory is difficult, because we cannot say in advance that a process will require a particular page at a particular time.
Therefore, to overcome this problem, a concept called demand paging is introduced. It suggests keeping all the pages in the secondary memory until they are required. In other words, it says do not load any page into the main memory until it is required.
Whenever any page is referred to for the first time, that page will be found in the secondary memory. After that, it may or may not be present in the main memory, depending upon the page replacement algorithm, which will be covered later in this tutorial.
What is a Page Fault?
If the referred page is not present in the main memory, then there will be a miss, and the concept is called a page miss or page fault.
The CPU has to access the missed page from the secondary memory. If the number of page faults is very high, then the effective access time of the system will become very high.
What is Thrashing?
If the number of page faults is equal to the number of referred pages, or the number of page faults is so high that the CPU remains busy just reading the pages from the secondary memory, then the effective access time becomes close to the time taken by the CPU to read one word from the secondary memory, which is very high. This concept is called thrashing.
If the page fault rate is PF, the time taken to get a page from the secondary memory and restart the instruction is S (the service time), and the memory access time is ma, then the effective access time can be given as:
EAT = PF x S + (1 - PF) x ma
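For a feel of the numbers (values assumed purely for illustration): if ma = 200 ns, S = 8 ms and the page fault rate is 0.1% (PF = 0.001), then EAT = 0.001 x 8,000,000 ns + 0.999 x 200 ns ≈ 8200 ns. Even a page fault rate of one in a thousand thus makes memory appear roughly 40 times slower.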

The differences between Demand Paging and Segmentation are as follows:
S.No. | Demand Paging                               | Segmentation
1.    | In demand paging, the pages are of          | In segmentation, segments can be of
      | equal size.                                 | different sizes.
2.    | Page size is fixed in demand paging.        | Segment size may vary in segmentation,
      |                                             | as it permits dynamic growth of segments.
3.    | It does not allow sharing of the pages.     | Segments can be shared in segmentation.
4.    | In demand paging, pages are loaded into     | In segmentation, segments are allocated
      | the memory on demand.                       | to the program during compilation.
5.    | The page map table in demand paging         | The segment map table in segmentation
      | manages the record of pages in memory.      | records every segment address in the memory.
6.    | It provides a large virtual memory and      | It provides virtual memory, and the maximum
      | makes more efficient use of memory.         | size of a segment is defined by the size of memory.

Performance of demand paging:

Performance of Paging:
Evaluating paging performance is one of the important tasks. Consider that the main memory access time is M and the page table is stored in the main memory. Every memory reference then needs two accesses: one to fetch the page-table entry and one for the actual data. Hence the evaluating expression for the effective memory access time is as follows:
Effective Memory Access Time (E.M.A.T.) = 2M
Features of Performance of Paging:
 A translation lookaside buffer (TLB) is added to improve the performance of paging.
 The TLB is a hardware device implemented using associative registers.
 The TLB access time is very small compared to the main memory access time.
 The TLB contains the frequently referred page numbers and the corresponding frame numbers.
Now, let's see the diagram given below of the performance of paging for a better understanding.

Evaluating expression for the performance of paging:
Consider that the TLB access time is 'c' and the TLB hit ratio is 'x'. Then the evaluating expression for the performance of paging is as follows:
Effective Memory Access Time (E.M.A.T.) with TLB = x(c + M) + (1 - x)(c + 2M)
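To see the benefit with some assumed numbers: if c = 20 ns, M = 100 ns and the hit ratio x = 0.9, then E.M.A.T. with TLB = 0.9 x (20 + 100) + 0.1 x (20 + 200) = 108 + 22 = 130 ns, compared with 2M = 200 ns without a TLB.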
Page replacement:

Page replacement is needed in operating systems that use virtual memory with demand paging. As we know, in demand paging only a set of pages of a process is loaded into the memory. This is done so that we can have more processes in the memory at the same time.
When a page that is residing in virtual memory is requested by a process for its execution, the operating system needs to decide which page in memory will be replaced by this requested page. This process is known as page replacement and is a vital component of virtual memory management.

Page replacement algorithms:

The page replacement algorithm decides which memory page


is to be replaced. The process of replacement is sometimes
called swap out or write to disk. Page replacement is done when
the requested page is not found in the main memory (page
fault).
There are two main aspects of virtual memory: frame allocation and page replacement. It is very important to have an optimal frame allocation and page replacement algorithm. Frame allocation is all about how many frames are to be allocated to the process, while page replacement is all about determining the page number which needs to be replaced in order to make space for the requested page.
What if the algorithm is not optimal?
1. If the number of frames allocated to a process is not sufficient or accurate, there can be a problem of thrashing. Due to the lack of frames, most of the pages will not be residing in the main memory, and therefore more page faults will occur. However, if the OS allocates more frames to the process than it needs, there can be internal fragmentation.
2. If the page replacement algorithm is not optimal, there will also be the problem of thrashing. If pages that have just been replaced are referenced again in the near future, there will be a larger number of swap-ins and swap-outs, and therefore the OS has to perform more replacements than usual, which causes a performance deficiency. Therefore, the task of an optimal page replacement algorithm is to choose the page which can limit the thrashing.
Types of Page Replacement Algorithms
There are various page replacement algorithms. Each algorithm has a different method by which the pages can be replaced.
1. Optimal Page Replacement algorithm → this algorithm replaces the page which will not be referred to for the longest time in the future. Although it cannot be implemented practically, it can be used as a benchmark; other algorithms are compared to it in terms of optimality.
2. Least Recently Used (LRU) page replacement algorithm → this algorithm replaces the page which has not been referred to for a long time. This algorithm is just the opposite of the optimal page replacement algorithm: in this, we look at the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page which was assigned a frame first will be replaced first. In other words, the page which resides at the front of the queue (the oldest page) will be replaced on every page fault. A sketch of this policy is given below.
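As an illustration of the FIFO policy, here is a minimal sketch in C that counts the page faults for a reference string; the reference string and the number of frames are assumptions made for this example.

#include <stdio.h>
#include <stdbool.h>

#define FRAMES 3  /* number of frames allocated to the process (assumed) */

int main(void) {
    /* Assumed page reference string for demonstration. */
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
    int n = sizeof refs / sizeof refs[0];

    int frames[FRAMES];
    int oldest = 0;            /* index of the page loaded first (FIFO victim) */
    int loaded = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < loaded; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {
            faults++;
            if (loaded < FRAMES) {
                frames[loaded++] = refs[i];        /* a free frame is available */
            } else {
                frames[oldest] = refs[i];          /* replace the oldest page */
                oldest = (oldest + 1) % FRAMES;
            }
        }
    }
    printf("Page faults: %d out of %d references\n", faults, n);
    return 0;
}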

Thrashing:

Thrashing occurs when page faults and swapping happen very frequently at a high rate, so the operating system has to spend more time swapping these pages than doing useful work. This state in the operating system is known as thrashing. Because of thrashing, the CPU utilization is going to be reduced to a very low or negligible level.

The basic concept involved is that if a process is allocated too


few frames, then there will be too many and too frequent page
faults. As a result, no valuable work would be done by the CPU,
and the CPU utilization would fall drastically.
The long-term scheduler would then try to improve the CPU utilization by loading some more processes into the memory, thereby increasing the degree of multiprogramming. Unfortunately, this would result in a further decrease in CPU utilization, triggering a chain reaction of higher page faults followed by an increase in the degree of multiprogramming, called thrashing.
Algorithms during Thrashing
Whenever thrashing starts, the operating system tries to apply
either the Global page replacement Algorithm or the Local page
replacement algorithm.
1. Global Page Replacement


Since global page replacement can bring any page, it tries to
bring more pages whenever thrashing is found. But what
actually will happen is that no process gets enough frames, and
as a result, the thrashing will increase more and more.
Therefore, the global page replacement algorithm is not
suitable when thrashing happens.
2. Local Page Replacement
Unlike the global page replacement algorithm, local page
replacement will select pages which only belong to that
process. So there is a chance to reduce the thrashing. But it is
proven that there are many disadvantages if we use local page
replacement. Therefore, local page replacement is just an
alternative to global page replacement in a thrashing scenario.
Causes of Thrashing
Programs or workloads may cause thrashing, and it results in
severe performance problems, such as:
o If CPU utilization is too low, the operating system increases the degree of multiprogramming by introducing a new process into the system, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming further.
o CPU utilization is plotted against the degree of
multiprogramming.
o As the degree of multiprogramming increases, CPU
utilization also increases.
o If the degree of multiprogramming is increased further,
thrashing sets in, and CPU utilization drops sharply.
o So, at this point, to increase CPU utilization and to stop


thrashing, we must decrease the degree of
multiprogramming.
How to Eliminate Thrashing
Thrashing has some negative impacts on hard drive health and system performance. Therefore, it is necessary to take some actions to avoid it. The following methods can be used to resolve the problem of thrashing:
o Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can also happen.
o Increase the amount of RAM: As insufficient memory can cause disk thrashing, one solution is to add more RAM to the machine. With more memory, the computer can handle tasks easily and doesn't have to work excessively. Generally, it is the best long-term solution.
o Decrease the number of applications running on the computer: If too many applications are running in the background, they consume a lot of system resources, and the little that remains can result in thrashing. Closing some applications will release some resources and help you avoid thrashing to some extent.
o Replace programs: Replace those programs that occupy a lot of memory with equivalents that use less memory.
Techniques to Prevent Thrashing
Local page replacement is better than global page replacement, but it has many disadvantages of its own, so it is sometimes not helpful. Therefore, below are some other techniques that are used to handle thrashing:
1. Locality Model
A locality is a set of pages that are actively used together. The
locality model states that as a process executes, it moves from
one locality to another. Thus, a program is generally composed
of several different localities which may overlap.
For example, when a function is called, it defines a new locality
where memory references are made to the function call
instructions, local and global variables, etc. Similarly, when the
function is exited, the process leaves this locality.
2. Working-Set Model
This model is based on the above-stated concept of the Locality
Model.
The basic principle states that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to some new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
According to this model, based on parameter A, the working
set is defined as the set of pages in the most recent 'A' page
references. Hence, all the actively used pages would always end
up being a part of the working set.
The accuracy of the working set is dependent on the value of
parameter A. If A is too large, then working sets may overlap.
On the other hand, for smaller values of A, the locality might
not be covered entirely.
If D is the total demand for frames and WSSi is the working-set size for process i, then
D = Σ WSSi
Now, if 'm' is the number of frames available in the memory,
there are two possibilities:
o D>m, i.e., total demand exceeds the number of frames,
then thrashing will occur as some processes would not get
enough frames.
o D<=m, then there would be no thrashing.
If there are enough extra frames, then some more processes can
be loaded into the memory. On the other hand, if the summation
of working set sizes exceeds the frames' availability, some of
the processes have to be suspended (swapped out of memory).
This technique prevents thrashing along with ensuring the
highest degree of multiprogramming possible. Thus, it
optimizes CPU utilization.
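As a quick check with assumed numbers: suppose three processes have working-set sizes WSS1 = 20, WSS2 = 35 and WSS3 = 50 frames, so D = 105. If the memory has m = 128 frames, then D <= m and there is no thrashing (another process could even be loaded); if m = 64, then D > m and some process must be suspended to prevent thrashing.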
3. Page Fault Frequency
A more direct approach to handle thrashing is the one that uses
the Page-Fault Frequency concept.
The problem associated with thrashing is the high page fault


rate, and thus, the concept here is to control the page fault rate.
If the page fault rate is too high, it indicates that the process has
too few frames allocated to it. On the contrary, a low page fault
rate indicates that the process has too many frames.
Upper and lower limits can be established on the desired page
fault rate, as shown in the diagram.
If the page fault rate falls below the lower limit, frames can be
removed from the process. Similarly, if the page faults rate
exceeds the upper limit, more frames can be allocated to the
process.
In other words, the graphical state of the system should be kept
limited to the rectangular region formed in the given diagram.
If the page fault rate is high and there are no free frames, some of the processes can be suspended, and the frames allocated to them can be reallocated to other processes. The suspended processes can be restarted later.
SECTION-IV

File Management

File system Structure:

A file system provides efficient access to the disk by allowing data to be stored, located and retrieved in a convenient way. A file system must be able to store a file, locate a file and retrieve a file.
Most operating systems use a layering approach for every task, including file systems. Every layer of the file system is responsible for some activities.
The image shown below elaborates how the file system is divided into different layers, and also the functionality of each layer.
o When an application program asks for a file, the first request is directed to the logical file system. The logical file system contains the metadata of the file and directory structure. If the application program doesn't have the required permissions for the file, then this layer will throw an error. The logical file system also verifies the path to the file.
o Generally, files are divided into various logical blocks. Files are to be stored on the hard disk and to be retrieved from the hard disk. The hard disk is divided into various tracks and sectors. Therefore, in order to store and retrieve the files, the logical blocks need to be mapped to physical blocks. This mapping is done by the file organization module, which is also responsible for free space management.
o Once the file organization module has decided which physical block the application program needs, it passes this information to the basic file system. The basic file system is responsible for issuing the commands to I/O control in order to fetch those blocks.
o I/O control contains the code by means of which it can access the hard disk. This code is known as the device drivers. I/O control is also responsible for handling interrupts.

File system organized in many layers:
 I/O Control level –
Device drivers act as an interface between devices and the OS; they help to transfer data between disk and main memory. A device driver takes a block number as input and, as output, issues the low-level hardware-specific instructions.
 Basic file system –
It issues general commands to the device driver to read and write physical blocks on the disk. It manages the memory buffers and caches. A block in the buffer can hold the contents of a disk block, and the cache stores frequently used file system metadata.
 File organization Module –
It has information about files, the location of files and their logical and physical blocks. Physical block numbers do not match the logical block numbers (which run from 0 to N), so a translation is needed to locate each block. It also manages the free space, which tracks unallocated blocks.
 Logical file system –
It manages metadata information about a file, i.e., it includes all details about a file except the actual contents of the file. It maintains this information via file control blocks. A file control block (FCB) has information about a file: owner, size, permissions, and location of the file contents.

Advantages:
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.

Disadvantages:
If we access many files at the same time, it results in low performance.
We can implement a file system by using two types of data structures:
1. On-disk Structures –
Generally they contain information about the total number of disk blocks, free disk blocks, their location, etc. The different on-disk structures are given below:
1. Boot Control Block –
It is usually the first block of the volume, and it contains the information needed to boot an operating system. In UNIX it is called the boot block, and in NTFS it is called the partition boot sector.
2. Volume Control Block –
It has information about a particular partition, e.g., free block count, block size and block pointers. In UNIX it is called the superblock, and in NTFS it is stored in the master file table.
3. Directory Structure –
It stores file names and the associated inode numbers. In UNIX, it includes file names and the associated inode numbers, and in NTFS it is stored in the master file table.
4. Per-File FCB –
It contains details about a file and has a unique identifier number to allow association with a directory entry. In NTFS it is stored in the master file table.
2. In-Memory Structures:
They are maintained in main memory, and they are helpful for file system management and caching. Several in-memory structures are given below:
1. Mount Table –
It contains information about each mounted volume.
2. Directory-Structure Cache –
This cache holds the directory information of recently accessed directories.
3. System-wide Open-File Table –
It contains a copy of the FCB of each open file.
4. Per-process Open-File Table –
It contains information about the files opened by that particular process, and each entry maps to the appropriate entry of the system-wide open-file table.
Directory Implementation:
1. Linear List –
It maintains a linear list of file names with pointers to the data blocks. It is time-consuming. To create a new file, we must first search the directory to be sure that no existing file has the same name, and then we add the file at the end of the directory. To delete a file, we search the directory for the named file and release the space. To reuse the directory entry, we can either mark the entry as unused or attach it to a list of free directory entries.
2. Hash Table –
The hash table takes a value computed from the file name and returns a pointer to the file. It decreases the directory search time, and the insertion and deletion of files are easy. The major difficulty is that hash tables are generally of fixed size, and the hash function depends on that size.

The following mechanisms can be used to hold the pointers of large files that do not fit in a single index block:
1. Linked scheme: This scheme links two or more index blocks together for holding the pointers. Every index block then contains a pointer or the address to the next index block.
2. Multilevel index: In this policy, a first-level index block is used to point to the second-level index blocks, which in turn point to the disk blocks occupied by the file. This can be extended to 3 or more levels depending on the maximum file size.
3. Combined scheme: In this scheme, a special block called the inode (information node) contains all the information about the file, such as the name, size, permissions, etc., and the remaining space of the inode is used to store the disk block addresses which contain the actual file, as shown in the image below. The first few of these pointers in the inode point to the direct blocks, i.e., the pointers contain the addresses of the disk blocks that contain the data of the file. The next few pointers point to indirect blocks. Indirect blocks may be single indirect, double indirect or triple indirect. A single indirect block is a disk block that does not contain the file data but the disk addresses of the blocks that do contain the file data. Similarly, double indirect blocks do not contain the file data but the disk addresses of the blocks that contain the addresses of the blocks containing the file data.

Allocation methods in File System:

Allocation Methods
There are various methods which can be used to allocate disk
space to the files. Selection of an appropriate allocation method
will significantly affect the performance and efficiency of the
system. Allocation method provides a way in which the disk
will be utilized and the files will be accessed.
The following methods can be used for allocation.
1. Contiguous Allocation.
2. Extents
3. Linked Allocation
4. Clustering
5. FAT
6. Indexed Allocation
7. Linked Indexed Allocation
8. Multilevel Indexed Allocation
9. Inode

Contiguous allocation:

If the blocks are allocated to the file in such a way that all the logical blocks of the file get contiguous physical blocks on the hard disk, then such an allocation scheme is known as contiguous allocation.
In the image shown below, there are three files in the directory.
The starting block and the length of each file are mentioned in
the table. We can check in the table that the contiguous blocks
are assigned to each file as per its need.
The directory entry for a file with contiguous allocation
contains
 Address of starting block
 Length of the allocated portion.
Advantages
1. It is simple to implement.
2. We will get excellent read performance.
3. Supports Random Access into files.
Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.

Linked Allocation:

Linked list allocation solves all the problems of contiguous allocation. In linked list allocation, each file is considered as a linked list of disk blocks. However, the disk blocks allocated to a particular file need not be contiguous on the disk. Each disk block allocated to a file contains a pointer which points to the next disk block allocated to the same file.
The directory entry contains a pointer to the starting and the ending file blocks. Each block contains a pointer to the next block occupied by the file.

Advantages
1. There is no external fragmentation with linked allocation.
2. Any free block can be utilized in order to satisfy the file
block requests.
3. File can continue to grow as long as the free blocks are
available.
4. Directory entry will only contain the starting block
address.
Disadvantages
1. Random access is not provided.
2. Pointers require some space in the disk blocks.
3. If any of the pointers in the linked list breaks, the file gets corrupted.
4. Need to traverse each block.
Indexed Allocation:
Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation scheme stores all the disk pointers in one of the blocks, called the index block. The index block doesn't hold the file data, but it holds the pointers to all the disk blocks allocated to that particular file. The directory entry contains only the index block address.

Advantages
1. Supports direct access.
2. A bad data block causes the loss of only that block.
Disadvantages
1. A bad index block could cause the loss of the entire file.
2. The size of a file depends upon the number of pointers an index block can hold.
3. Having an index block for a small file is a complete waste of space.
4. More pointer overhead.

Free space management: Bit vector, Linked list, Grouping, Counting:

Free Space Management
A file system is responsible for allocating the free blocks to files; therefore, it has to keep track of all the free blocks present in the disk. The following are the approaches by which the free blocks in the disk are managed.
1. Bit Vector
In this approach, the free-space list is implemented as a bit map vector. It contains a number of bits, where each bit represents one block.
If the block is free then the bit is 1; otherwise it is 0. Initially all the blocks are free, therefore each bit in the bit map vector contains 1.
As the space allocation proceeds, the file system starts allocating blocks to the files and setting the respective bits to 0.
Advantages –
 Simple to understand.
 Finding the first free block is efficient. It requires scanning the words (a group of 8 bits) in the bitmap for a non-zero word (a 0-valued word has all bits 0). The first free block is then found by scanning for the first 1 bit in the non-zero word.
The block number can be calculated as:
(number of bits per word) x (number of 0-valued words) + offset of the first 1 bit in the non-zero word
For Figure-1, we scan the bitmap sequentially for the first non-zero word. The first group of 8 bits (00001110) constitutes a non-zero word since not all of its bits are 0. After the non-zero word is found, we look for the first 1 bit. This is the 5th bit of the non-zero word, so offset = 5.
Therefore, the first free block number = 8*0 + 5 = 5.
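The scan just described can be sketched in C as follows; the bitmap contents mirror the Figure-1 example, and the bit ordering (counting bits from the most significant side, 1-based, as in the worked calculation above) is an assumption of this sketch.

#include <stdio.h>

#define BITS_PER_WORD 8

/* Bitmap words; a 1 bit means the corresponding block is free.
   The first word mirrors the example above: 00001110. */
unsigned char bitmap[] = {0x0E, 0xFF, 0x00};
int n_words = sizeof bitmap / sizeof bitmap[0];

/* Returns the first free block number, or -1 if no block is free. */
int first_free_block(void) {
    for (int w = 0; w < n_words; w++) {
        if (bitmap[w] == 0) continue;              /* 0-valued word: skip it */
        for (int b = 0; b < BITS_PER_WORD; b++)
            if (bitmap[w] & (0x80 >> b))           /* test bit b from the left */
                return BITS_PER_WORD * w + b + 1;  /* words skipped * 8 + offset */
    }
    return -1;
}

int main(void) {
    printf("First free block = %d\n", first_free_block());  /* prints 5 */
    return 0;
}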
2. Linked List
It is another approach to free space management. This approach suggests linking together all the free blocks and keeping a pointer in the cache which points to the first free block.
Therefore, all the free blocks on the disk will be linked together with pointers. Whenever a block gets allocated, its previous free block will be linked to its next free block.
In Figure-2, the free-space list head points to Block 5, which points to Block 6, the next free block, and so on. The last free block would contain a null pointer indicating the end of the free list.
A drawback of this method is the I/O required for traversing the free-space list.

Grouping –
This approach stores the addresses of the free blocks in the first free block. The first free block stores the addresses of, say, n free blocks. Out of these n blocks, the first n-1 blocks are actually free, and the last block contains the addresses of the next n free blocks.
An advantage of this approach is that the addresses of a group of free disk blocks can be found easily.

Counting –
This approach stores the address of the first free disk block and a number n of free contiguous disk blocks that follow the first block.
Every entry in the list contains:
1. The address of the first free disk block
2. A number n
For example, in Figure-1, the first entry of the free-space list would be ([Address of Block 5], 2), because 2 contiguous free blocks follow Block 5.
CHAPTER-6

DEVICE MANAGEMENT

Disk structure:

Disk Structure in Operating System: The actual physical details of a modern hard disk may be quite complicated. Simply put, there are one or more surfaces, each of which contains several tracks, each of which is divided into sectors.

On Disk Data Structures


There are various on-disk data structures that are used to implement a file system. These structures may vary depending upon the operating system.
1. Boot Control Block
Boot Control Block contains all the information which is
needed to boot an operating system from that volume. It is
called boot block in UNIX file system. In NTFS, it is
called the partition boot sector.
2. Volume Control Block
The volume control block contains all the information regarding that volume, such as the number of blocks, size of each block, partition table, pointers to free blocks and free FCB blocks. In the UNIX file system, it is known as the superblock. In NTFS, this information is stored inside the master file table.
3. Directory Structure (per file system)


A directory structure (per file system) contains file names and pointers to the corresponding FCBs. In UNIX, it includes the inode numbers associated with the file names.
4. File Control Block
The file control block contains all the details about the file, such as ownership details, permission details, file size, etc. In UFS, this detail is stored in the inode. In NTFS, this information is stored inside the master file table as a relational database structure. A typical file control block is shown in the image below.
Disk scheduling: FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK
Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to be scheduled.
 Two or more requests may be far from each other, which can result in greater disk arm movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.
There are many disk scheduling algorithms, but before discussing them let's have a quick look at some of the important terms:

 Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. The disk scheduling algorithm that gives the minimum average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a position where it can be accessed by the read/write heads. The disk scheduling algorithm that gives minimum rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and the number of bytes to be transferred.
 Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time
 Disk Response Time: Response time is the average time a request spends waiting to perform its I/O operation. The average response time is the response time of all the requests. The variance of response time is a measure of how individual requests are serviced with respect to the average response time. The disk scheduling algorithm that gives the minimum variance of response time is better.
Disk Scheduling Algorithms
1. FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue. Let us understand this with the help of an example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190), and the current position of the read/write head is 50.
So, total seek time:
= (82-50) + (170-82) + (170-43) + (140-43) + (140-24) + (24-16) + (190-16)
= 642
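A minimal sketch in C of this FCFS seek-time calculation (the request order and the initial head position are taken from the example above):

#include <stdio.h>
#include <stdlib.h>

#define N 7

int main(void) {
    int requests[N] = {82, 170, 43, 140, 24, 16, 190};
    int head = 50, total_seek = 0;

    /* FCFS: service the requests strictly in arrival order. */
    for (int i = 0; i < N; i++) {
        total_seek += abs(requests[i] - head);  /* arm movement for this request */
        head = requests[i];
    }
    printf("Total seek time = %d\n", total_seek);  /* prints 642 */
    return 0;
}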

Advantages:
 Every request gets a fair chance
 No indefinite postponement
Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service

2. SSTF: In SSTF (Shortest Seek Time First), the request having the shortest seek time is executed first. So, the seek time of every pending request is calculated in advance, and the requests are scheduled according to their calculated seek time. As a result, the request nearest to the disk arm will get executed first. SSTF is certainly an improvement over FCFS as it decreases the average response time and increases the throughput of the system. Let us understand this with the help of an example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190), and the current position of the read/write head is 50.
So, total seek time:
= (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170)
= 208
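A minimal sketch in C of the SSTF order (the request list and head position are taken from the example above):

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 7

int main(void) {
    int requests[N] = {82, 170, 43, 140, 24, 16, 190};
    bool served[N] = {false};
    int head = 50, total_seek = 0;

    /* SSTF: at each step, service the pending request closest to the head. */
    for (int count = 0; count < N; count++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < N; i++) {
            if (served[i]) continue;
            int dist = abs(requests[i] - head);
            if (best == -1 || dist < best_dist) { best = i; best_dist = dist; }
        }
        served[best] = true;
        total_seek += best_dist;
        head = requests[best];
    }
    printf("Total seek time = %d\n", total_seek);  /* prints 208 */
    return 0;
}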

Advantages:
 Average response time decreases
 Throughput increases
Disadvantages:
 Overhead to calculate seek time in advance
 Can cause starvation for a request if it has a higher seek time than the incoming requests
 High variance of response time, as SSTF favours only some requests
3. SCAN: In the SCAN algorithm, the disk arm moves in a particular direction and services the requests coming in its path; after reaching the end of the disk, it reverses its direction and again services the requests arriving in its path. So, this algorithm works like an elevator and is hence also known as the elevator algorithm. As a result, the requests at the mid-range are serviced more, and those arriving behind the disk arm have to wait.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The read/write arm is at 50, and it is also given that the disk arm should move “towards the larger value”.
Therefore, the seek time is calculated as:
= (199-50) + (199-16)
= 332
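A sketch in C of the SCAN calculation, assuming a disk with cylinders 0 to 199 (as in the example) and movement towards the larger values first:

#include <stdio.h>
#include <stdlib.h>

#define N 7
#define DISK_END 199  /* last cylinder; assumed from the example */

/* Comparison helper for qsort: ascending order of cylinder numbers. */
int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void) {
    int requests[N] = {82, 170, 43, 140, 24, 16, 190};
    int head = 50, total_seek = 0;

    qsort(requests, N, sizeof requests[0], cmp);

    /* Sweep towards the larger values, servicing 82, 140, 170, 190
       on the way, until the end of the disk is reached... */
    total_seek += DISK_END - head;
    /* ...then reverse and sweep back to the smallest pending request (16). */
    total_seek += DISK_END - requests[0];

    printf("Total seek time = %d\n", total_seek);  /* (199-50)+(199-16) = 332 */
    return 0;
}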

Advantages:
 High throughput
 Low variance of response time
 Average response time
Disadvantages:
 Long waiting time for requests for locations just visited by the disk arm

4. C-SCAN: In the SCAN algorithm, the disk arm re-scans the path that has already been scanned after reversing its direction. So, it may be possible that too many requests are waiting at the other end, or there may be zero or few requests pending in the area just scanned. These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing its direction, goes to the other end of the disk and starts servicing the requests from there. So, the disk arm moves in a circular fashion. This algorithm is similar to the SCAN algorithm, and hence it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82, 170, 43, 140,
24, 16, 190. The Read/Write arm is at 50, and it is given
that the disk arm should move “towards the larger value”.

Seek time is calculated as:
= (199-50) + (199-0) + (43-0)
= 391
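The circular sweep can be sketched in Python (our illustration, with the same assumptions as the SCAN sketch: cylinders 0 to 199, arm moving towards larger values first; the jump back to cylinder 0 is counted as head movement, matching the calculation above):

def cscan_seek_time(requests, head, disk_end=199):
    # Sweep up to the disk edge, jump to cylinder 0, then service the rest upwards.
    up = sorted(r for r in requests if r >= head)
    down = sorted(r for r in requests if r < head)
    path = up + ([disk_end, 0] + down if down else [])
    total, pos = 0, head
    for r in path:
        total += abs(r - pos)
        pos = r
    return total

print(cscan_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # prints 391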
Advantages:
 Provides a more uniform wait time compared to SCAN
5. LOOK: It is similar to the SCAN disk scheduling
algorithm except that the disk arm, instead of going
all the way to the end of the disk, goes only as far as the
last request to be serviced in front of the head and then
reverses its direction from there. Thus it avoids the
extra delay caused by unnecessary traversal to the end
of the disk.

Example:
Suppose the requests to be addressed are 82, 170, 43, 140,
24, 16, 190. The Read/Write arm is at 50, and it is given
that the disk arm should move “towards the larger value”.

So, the seek time is calculated as:
= (190-50) + (190-16)
= 314
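A minimal Python sketch (our illustration; the only change from the SCAN sketch is that the arm reverses at the farthest pending request instead of the disk edge):

def look_seek_time(requests, head):
    # Sweep up to the last pending request, then reverse and service the rest.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up + down:
        total += abs(r - pos)
        pos = r
    return total

print(look_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # prints 314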

6. C-LOOK: Just as LOOK is similar to the SCAN
algorithm, C-LOOK is similar to the C-SCAN disk
scheduling algorithm. In C-LOOK, the disk arm, instead
of going to the end of the disk, goes only as far as the
last request to be serviced in front of the head, and from
there jumps to the other end's last request. Thus, it also
avoids the extra delay caused by unnecessary traversal
to the end of the disk.

Example:
Suppose the requests to be addressed are 82, 170, 43, 140,
24, 16, 190. The Read/Write arm is at 50, and it is given
that the disk arm should move “towards the larger value”.

So, the seek time is calculated as:
= (190-50) + (190-16) + (43-16)
= 341
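A minimal Python sketch (our illustration; after the highest pending request the arm jumps straight to the lowest pending request and continues upwards):

def clook_seek_time(requests, head):
    # Sweep up to the farthest request, jump to the lowest pending one, continue.
    up = sorted(r for r in requests if r >= head)
    down = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in up + down:
        total += abs(r - pos)
        pos = r
    return total

print(clook_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # prints 341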
7. RSS: It stands for Random Scheduling and, true to its
name, it services requests in random order. It is suited
to situations where scheduling involves random
attributes such as random processing times, random due
dates, random weights, and stochastic machine
breakdowns, which is why it is usually used for
analysis and simulation.
8. LIFO: In the LIFO (Last In, First Out) algorithm, the
newest jobs are serviced before the existing ones, i.e.
the request that entered last is serviced first, and then
the rest are serviced in the same reverse order of
arrival.
Advantages:
 Maximizes locality and resource utilization

Disadvantages:
 Can seem a little unfair to other requests; if new
requests keep coming in, it causes starvation of the
old and existing ones.
9. N-STEP SCAN: It is also known as the N-STEP LOOK
algorithm. In this, a buffer is created for N requests.
All requests belonging to a buffer are serviced in
one go. Also, once the buffer is full, no new requests
are kept in this buffer; they are sent to another one.
When these N requests have been serviced, the next
batch of N requests is taken up, and in this way every
request gets a guaranteed service.
Advantages:
 It eliminates starvation of requests completely
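The buffering idea can be sketched in Python (our illustration; n_step_batches is an invented name, and each frozen batch is simply sorted here to stand in for one SCAN pass):

def n_step_batches(arrivals, n):
    # Freeze arrivals into buffers of at most n requests; requests arriving
    # later go into the next buffer, so no request is postponed forever.
    batches = []
    for i in range(0, len(arrivals), n):
        batches.append(sorted(arrivals[i:i + n]))
    return batches

print(n_step_batches([82, 170, 43, 140, 24, 16, 190], 3))
# prints [[43, 82, 170], [16, 24, 140], [190]]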

10. FSCAN: This algorithm uses two sub-queues.
During the scan, all requests in the first queue are
serviced and the new incoming requests are added to
the second queue. All new requests are kept on hold
until the existing requests in the first queue have been
serviced.
Advantages:
 FSCAN, along with N-Step-SCAN, prevents “arm
stickiness” (a phenomenon in I/O scheduling where the
scheduling algorithm keeps servicing requests at or
near the current sector and thus prevents any
seeking)
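The two-queue idea can be sketched in Python (our illustration; requests that arrive during the scan are parked in the second queue and swapped in only when the first queue is empty):

def fscan(first_queue, arrived_during_scan):
    # Service the frozen first queue completely, then swap in the queue that
    # collected the requests which arrived in the meantime.
    while first_queue:
        for r in sorted(first_queue):
            print("servicing cylinder", r)
        first_queue, arrived_during_scan = arrived_during_scan, []

fscan([82, 170, 43], [140, 24, 16, 190])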
Each algorithm is unique in its own way. Overall
performance depends on the number and type of
requests.
