MDU BCA- Operating System
Banker’s Algorithm
SECTION-III
Memory Management:
Logical versus Physical address space
Swapping
Contiguous allocation
Paging
Segmentation
Virtual Memory:
Demand Paging
Performance of Demand Paging
Page Replacement
Page Replacement Algorithm
Thrashing
SECTION-IV
File Management:
File System Structure
Allocation methods: Contiguous allocation,
Linked allocation, Indexed allocation
Free space management: Bit vector, Linked
list, Grouping, Counting
Device Management:
Disk structure
Disk scheduling: FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK
SECTION-I
Compiler –
Loader –
A Loader is a routine that loads an object program and
prepares it for execution. There are various loading schemes:
absolute, relocating, and direct-linking. In general, the loader
must load, relocate and link the object program. The loader is
a program that places programs into memory and prepares
them for execution. In a simple loading scheme, the
assembler outputs the machine language translation of a
program on a secondary device and a loader places it in the
core. The loader places into memory the machine language
version of the user’s program and transfers control to it.
Since the loader program is much smaller than the assembler,
this makes more core (memory) available to the user’s program.
Generation   Year      Electronic device used     Types of OS / Device
First        1945-55   Vacuum Tubes               Plug Boards
Second       1955-65   Transistors                Batch Systems
Third        1965-80   Integrated Circuits (IC)   Multiprogramming
deadlines.
Examples of Operating Systems are –
Windows (GUI based, PC)
GNU/Linux (Personal, Workstations, ISP, File and print
server, Three-tier client/Server)
macOS (Macintosh), used for Apple’s personal computers
and workstations (MacBook, iMac).
Android (Google’s Operating System for
smartphones/tablets/smartwatches)
iOS (Apple’s OS for iPhone, iPad, and iPod Touch)
Program Execution
Memory management
Virtual Memory
Multitasking
Handling I/O operations
Manipulation of the file system
Error Detection and handling
Resource allocation
Information and Resource Protection
Introduction to Kernel
Features of Kernel
Low-level scheduling of processes
Inter-process communication
Process synchronization
Context switching
Types of Kernel
There are many types of kernels that exist, but among them,
the two most popular kernels are:
1. Monolithic
A monolithic kernel is a single code or block of the program. It
provides all the required services offered by the operating
system.
Multitasking:
Operating System manages memory and allows multiple
programs to run in their own space and even communicate
with each other through shared memory. Multitasking gives
users a good experience as they can perform several tasks
on a computer at a time.
o Communication
Program execution
To execute a program, several tasks need to be performed. Both
the instructions and data must be loaded into the main memory.
In addition, input-output devices and files should be initialized,
and other resources must be prepared. The operating system
handles these kinds of tasks. The user no longer needs to worry
about memory allocation, multitasking, or anything else.
Control Input/output devices
There are numerous types of I/O devices within the computer
system, and each I/O device requires its own precise set of
instructions for operation. The Operating System hides these
details by presenting a uniform interface. Thus, it is
convenient for programmers to access such devices easily.
Program Creation
The Operating System offers the structures and tools, including
editors and debuggers, to help the programmer create, modify,
and debug programs.
Error Detection and Response
An error in a device may cause malfunctioning of the entire
system. These include hardware and software errors such as
device failure, memory error, division by zero, and attempts to
access forbidden memory locations. To avoid errors, the
operating system monitors the system for detecting errors and
takes suitable action with the least impact on running
applications.
Disadvantages
There are various disadvantages of the Batch Operating
System. Some of them are as follows:
1. When a job fails once, it must be scheduled again to be
completed, and it may take a long time to complete the
task.
2. Computer operators must have full knowledge of batch
systems.
3. The batch system is quite difficult to debug.
4. The computer system and the user have no direct
interaction.
5. If a job enters an infinite loop, other jobs must wait for an
unknown period of time.
Time-Sharing Operating System
Time-sharing is a technique that enables many people located
at various terminals to use a particular computer system
simultaneously. Time-Sharing is the logical extension of
multiprogramming. In this time-sharing Operating system,
many processes are allocated with computer resources in
respective time slots. In this, the processor's time is shared with
multiple users. That's why it is called a time-sharing operating
system. It has a fixed time slice for the different processes. Its
main purpose is interactive response time.
Multiprogramming: Here the system does not take the same time to work on different processes.
Time-sharing: Here the system works for the same or less time on each process.
Personal Computer:
What is a PC?
messaging on computers.
Communication is also possible between humans and
machines as well as between humans over a computer.
For example, you can turn off your lights while you are
outside your house. Also, you can set your DVR to
record a movie if it is connected to your computer via a
wireless Internet connection.
o Information: A personal computer helps people to have
information constantly at their fingertips by connecting to
the Internet. On the computer, a user can type a desired
website URL into the browser’s address bar and visit the
site in just a few seconds. There are several user-submitted
communities, books, and encyclopedias available online
that are designed to offer specific types of information,
such as video game walkthroughs. Additionally, you can
play educational games offline or use digital encyclopedia
software.
Disadvantages of Personal Computers
o Physical Side Effects: The use of a computer frequently
Distributed Systems:
currently.
Security poses a problem due to easy access to data as the
i.e., if there is a lag in the network then the user will face a
problem accessing data.
E-Commerce websites.
Information Society: Search Engines, Wikipedia, Social
networking, SAP.
Entertainment: Online Gaming, Music, YouTube.
Education: E-learning.
S.No   Parallel System                          Distributed System

4.     These systems are multiprocessor         In Distributed Systems, each
       systems.                                 processor has its own memory.

6.     These systems have close                 These systems communicate with one
       communication with more than one        another through various communication
       processor.                               lines, such as high-speed buses or
                                                telephone lines.
Real-Time Systems:
o MTS
o Lynx
o QNX
o VxWorks etc.
Applications of Real-time operating system (RTOS):
RTOS is used in real-time applications that must work within
specific deadlines. The following are common areas of
application of real-time operating systems:
o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock
trading.
o Real-time operating systems are used in cell phone
switching systems.
o Real-time operating systems are used in air traffic
control systems.
o Real-time operating systems are used in medical imaging
systems.
o Real-time operating systems are used in fuel injection
systems.
o Real-time operating systems are used in traffic control
systems.
o Real-time operating systems are used in autopilot travel
simulators.
Types of Real-time operating system
The following are the three types of RTOS systems:
CHAPTER-2
PROCESS MANAGEMENT
Process concept:
Operation on processes:
background processes.
A user may request to create a new process.
A running process may create a new process while executing.
A batch system takes the initiation of a batch job.
Cooperating Processes:
Threads:
Interprocess Communication:
Spinlock:-
A spinlock is a type of lock, as its name implies. A process
trying to acquire a spinlock waits in a loop, repeatedly
checking whether the lock is available. This is known as busy
waiting because, even though the process is active, it does not
perform any useful work (or task).
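The busy-waiting behaviour can be sketched in Python. This is only an illustration, not a real kernel spinlock: it reuses `threading.Lock` in non-blocking mode to stand in for an atomic test-and-set instruction.

```python
import threading

def spin_acquire(lock):
    # Busy waiting: repeatedly try to take the lock without blocking.
    # The process stays active but does no useful work while it spins.
    while not lock.acquire(blocking=False):
        pass            # keep checking whether the lock is available

lock = threading.Lock()
spin_acquire(lock)      # returns as soon as the lock has been taken
# ... critical section ...
lock.release()
```

Because the waiter never sleeps, spinlocks are only worthwhile when the lock is expected to be held for a very short time.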
Approaches to Interprocess Communication
We will now discuss some different approaches to inter-process
communication, which are as follows:
1. Pipe
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect Communication
6. Message Passing
7. FIFO
To understand them in more detail, we will discuss each of
them individually.
Pipe:-
The pipe is a type of data channel that is unidirectional in
nature, meaning that data in this type of channel can be moved
in only a single direction at a time. Still, one can use two
channels of this type, so that data can be sent and received
between two processes. Typically, it uses the standard methods
for input and output. Pipes are used in all types of POSIX
systems and in different versions of Windows operating systems
as well.
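A minimal sketch of the unidirectional behaviour, using the POSIX-style `os.pipe` call available in Python:

```python
import os

r, w = os.pipe()           # r: read end, w: write end (data flows one way)
os.write(w, b"hello")      # one process would write to the write end...
os.close(w)
data = os.read(r, 1024)    # ...and the other would read from the read end
os.close(r)
print(data)                # b'hello'
```

In a real use the two ends would typically be held by two different processes (e.g., a parent and a child after a fork), which is what makes the pipe an IPC mechanism.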
Shared Memory:-
It can be referred to as a type of memory that can be used or
accessed by multiple processes simultaneously. It is primarily
used so that the processes can communicate with each other.
Therefore the shared memory is used by almost all POSIX and
Windows operating systems as well.
Message Queue:-
In general, several different processes are allowed to read and
write data to the message queue. In the message queue, the
messages are stored, or stay in the queue, until their recipients
retrieve them. In short, we can also say that the message queue
is very helpful in inter-process communication and is used by all
operating systems.
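A short sketch using Python's `multiprocessing.Queue` (an illustration of the idea, not the OS-level System V message queue API): messages stay in the queue until the recipient retrieves them.

```python
from multiprocessing import Process, Queue

def producer(q):
    q.put("message 1")        # messages stay in the queue until
    q.put("message 2")        # the recipient retrieves them

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    print(q.get())            # retrieves "message 1"
    print(q.get())            # retrieves "message 2"
    p.join()
```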
Message Passing:-
It is a type of mechanism that allows processes to synchronize
and communicate with each other. By using message passing,
processes can communicate with each other without resorting to
shared variables.
Usually, the inter-process communication mechanism provides
two operations, which are as follows:
o send (message)
o receive (message)
Note: The size of the message can be fixed or variable.
Direct Communication:-
In this type of communication process, usually, a link is created
or established between two communicating processes.
However, in every pair of communicating processes, only one
link can exist.
Indirect Communication
Indirect communication can only exist or be established when
processes share a common mailbox, and each pair of these
processes shares multiple communication links. These shared
links can be unidirectional or bi-directional.
FIFO:-
It is a type of general communication between two unrelated
processes. It can also be considered as full-duplex, which
means that one process can communicate with another process
and vice versa.
Some other different approaches
o Socket:-
Therefore, they are not used for sending data but for remote
commands between multiple processes.
Why do we need interprocess communication?
There are numerous reasons to use inter-process
communication for sharing data. Here are some of the most
important reasons:
o It helps to speed up modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes to communicate with each other
and synchronize their actions.
SECTION-II
CPU Scheduling
Scheduling criteria:
Scheduling Algorithm:
2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a
time quantum (slice). All the processes get executed in a
cyclic way. Each process gets the CPU for a small amount of
time (the time quantum) and then goes back to the ready queue
to wait for its next turn. It is a preemptive type of
scheduling.
3. Shortest Job First
The job with the shortest burst time will get the CPU first. The
lesser the burst time, the sooner will the process get the CPU.
It is the non-preemptive type of scheduling.
4. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS
schedules the Job according to the remaining time of the
execution.
5. Priority based scheduling
In this algorithm, the priority will be assigned to each of the
processes. The higher the priority, the sooner will the process
get the CPU. If the priority of two processes is the same, then
they will be scheduled according to their arrival time.
6. Highest Response Ratio Next
In this scheduling algorithm, the process with the highest
response ratio will be scheduled next. This reduces starvation
in the system.
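The response ratio itself is not spelled out above; by the standard definition it is (waiting time + burst time) / burst time. A small sketch (the process tuples used in the helper are a hypothetical format, not from this text):

```python
def response_ratio(waiting_time, burst_time):
    # Response Ratio = (W + S) / S, where W is the time the process has
    # waited so far and S is its service (burst) time.
    return (waiting_time + burst_time) / burst_time

def next_process_hrrn(ready):
    """ready: list of (pid, waiting_time, burst_time).
    Pick the process with the highest response ratio."""
    return max(ready, key=lambda p: response_ratio(p[1], p[2]))[0]

# A long-waiting short job ("A") beats a recently arrived one ("B").
print(next_process_hrrn([("A", 6, 3), ("B", 2, 4)]))   # A
```

Because the ratio grows as a process waits, even long jobs are eventually selected, which is why starvation is reduced.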
FCFS Algorithm:
Process   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
0         0              2            2                 2                 0
1         1              6            8                 7                 1
2         2              4            12                10                6
3         3              9            21                18                9
4         4              12           33                29                17
(Gantt chart)
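The table above can be reproduced with a small first-come-first-served simulation (a sketch, assuming arrival times 0-4 with the burst times shown):

```python
def fcfs(procs):
    """procs: list of (pid, arrival_time, burst_time).
    Returns {pid: (completion, turnaround, waiting)}."""
    time, result = 0, {}
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst          # run to completion, in order
        tat = time - arrival                       # turnaround = completion - arrival
        result[pid] = (time, tat, tat - burst)     # waiting = turnaround - burst
    return result

print(fcfs([(0, 0, 2), (1, 1, 6), (2, 2, 4), (3, 3, 9), (4, 4, 12)]))
```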
Algorithm:
Sort all the processes according to the arrival time.
Then select the process that has the minimum arrival time and
the minimum burst time.

Process   Arrival Time   Burst Time   Completion Time   Turnaround Time   Waiting Time
1         1              7            8                 7                 0
2         3              3            13                10                7
3         6              2            10                4                 2
4         7              10           31                24                14
5         9              8            21                12                4
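The worked table above is consistent with non-preemptive Shortest Job First; a sketch that reproduces its completion times:

```python
def sjf_non_preemptive(procs):
    """procs: list of (pid, arrival, burst). Returns {pid: completion_time}."""
    pending = sorted(procs, key=lambda p: p[1])    # sort by arrival time
    time, completion = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                              # CPU idle until next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])       # pick the shortest burst
        pending.remove(job)
        time += job[2]                             # run it to completion
        completion[job[0]] = time
    return completion

print(sjf_non_preemptive([(1, 1, 7), (2, 3, 3), (3, 6, 2), (4, 7, 10), (5, 9, 8)]))
```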
Advantages
1. It is actually implementable in the system because it
does not depend on the burst time.
2. It doesn't suffer from the problem of starvation or the
convoy effect.
3. All the jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time
in the system.
2. The lower the time quantum, the higher the context
switching overhead in the system.
3. Deciding a perfect time quantum is really a very difficult
task in the system.
Examples to show working of Round Robin Scheduling
Algorithm:
Process   Burst Time   Arrival Time
P1        5 ms         0 ms
P2        4 ms         1 ms
P3        2 ms         2 ms
P4        1 ms         4 ms
Here, every process executes for at most 2 ms at a time (Time
Quantum = 2 ms). The execution proceeds as follows:

Time Instance   Running Process   Ready Queue after the slice   Execution Time   Remaining Burst Time
0-2 ms          P1                P2, P3, P1                    2 ms             3 ms
2-4 ms          P2                P3, P1, P4, P2                2 ms             2 ms
4-6 ms          P3                P1, P4, P2                    2 ms             0 ms
6-8 ms          P1                P4, P2, P1                    2 ms             1 ms
8-9 ms          P4                P2, P1                        1 ms             0 ms
9-11 ms         P2                P1                            2 ms             0 ms
11-12 ms        P1                (empty)                       1 ms             0 ms

At time = 2, processes P2 and P3 have arrived in the ready
queue; P1 is preempted and added behind them.
At time = 4, process P4 arrives in the ready queue.
At time = 6, process P3 completes its execution.
At time = 8, process P4 starts executing; it does not run for a
full time quantum, since its remaining burst time is only 1 ms.
At time = 9, process P4 completes its execution.
At time = 11, process P2 completes its execution.
At time = 12, process P1 completes its execution.
Now, let's calculate the average waiting time and turnaround time:

Processes   AT   BT   CT   TAT         WT
P1          0    5    12   12-0 = 12   12-5 = 7
P2          1    4    11   11-1 = 10   10-4 = 6
P3          2    2    6    6-2 = 4     4-2 = 2
P4          4    1    9    9-4 = 5     5-1 = 4

Now,
Average turnaround time = (12 + 10 + 4 + 5)/4 = 31/4 = 7.75
Average waiting time = (7 + 6 + 2 + 4)/4 = 19/4 = 4.75
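The timeline above can be checked with a small Round Robin simulation (a sketch, using the time quantum of 2 ms and the arrival/burst times from the example):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (pid, arrival, burst). Returns {pid: completion_time}."""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {pid: bt for pid, _, bt in procs}
    ready, completion = deque(), {}
    time, i = 0, 0
    while len(completion) < len(procs):
        # admit every process that has arrived by the current time
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if not ready:                       # CPU idle until the next arrival
            time = procs[i][1]
            continue
        pid = ready.popleft()
        run = min(quantum, remaining[pid])  # run for at most one quantum
        time += run
        remaining[pid] -= run
        # processes arriving during the slice enter before the preempted one
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if remaining[pid] == 0:
            completion[pid] = time
        else:
            ready.append(pid)               # preempted: back of the ready queue
    return completion

# Completion times: P3 at 6, P4 at 9, P2 at 11, P1 at 12
print(round_robin([(1, 0, 5), (2, 1, 4), (3, 2, 2), (4, 4, 1)], 2))
```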
Process   Burst Time   Arrival Time
P1        10 ms        0 ms
P2        5 ms         0 ms
P3        8 ms         0 ms

Now, let's calculate the average waiting time and turnaround time:

Processes   AT   BT   CT   TAT          WT
P1          0    10   23   23-0 = 23    23-10 = 13
P2          0    5    15   15-0 = 15    15-5 = 10
Queue Algorithms:
Disadvantages
1. There is a risk of starvation for lower priority processes.
2. It is rigid in nature.
Example
Let's take an example of a multilevel queue-scheduling
algorithm with five queues to understand how this scheduling
works:
1. System process
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
Every queue has absolute priority over the lower-priority
queues. No process may execute until the higher-priority
queues are empty. In the above instance, no other process may
execute until and unless the queues for system, interactive, and
Process   Arrival Time   Burst Time   Queue Number
P1        0              4            1
P2        0              3            1
P3        0              8            2
P4        10             5            1
Working:
1. Both queues have processes at the start.
Therefore, queue 1 (P1, P2) runs first (due to higher
priority) in a round-robin way and finishes after 7 units.
2. The process in queue 2 (process P3) starts running (since
there is no process in queue 1), but while it is executing,
P4 enters queue 1 and preempts P3. P4 takes the CPU, and
after it finishes, P3 resumes and completes its execution.
Multilevel Feedback Scheduling
CHAPTER-4
DEADLOCKS
Deadlock characterization:
Deadlock Characterization
1. Deadlock Prerequisites
2. Systems Resource Allocation Graph
Deadlock Prerequisites
i) Mutual Exclusion
In a multiprogramming environment, there may be several
processes requesting the same resource at a time. The mutual
exclusion condition allows only a single process to access the
resource at a time, while the other processes requesting the
same resource must wait and delay their execution until it has
been released. The mutual exclusion condition thus prevents two
or more processes from accessing the same resource at a time.
iii) No Preemption
A process holding a resource cannot be preempted in
between and forced to release the acquired resource. Instead,
the process must voluntarily release the resource it has
acquired when its task has been completed.
iv) Circular Wait
The processes must be waiting in a circular pattern to acquire
the resource. This is similar to hold and wait; the only
difference is that the processes are waiting in a circular
pattern. Let us discuss this with an example: there are five
processes, i.e., P0, P1, P2, P3, P4. Process P0 is waiting to
acquire the resource held by P1, process P1 is waiting to
acquire the resource held by P2, ..., and process P4 is waiting
to acquire the resource held by P0.
The figure below shows that process P1 has requested an
instance of resource R1 and is already holding an instance of
resource R2. Process P2 has requested an instance of resource
R3 and is already holding instances of resources R1 and R3.
Process P3 has not requested any resource instance but is
holding an instance of resource R3.
Banker’s Algorithm:
loan, then the bank first subtracts the loan amount from the
total money that the bank has, and only if the remaining amount
is greater than S is the loan sanctioned. It is done so that if
all the account holders come to withdraw their money, the bank
can easily do it.
Advantages
Following are the essential characteristics of the Banker's
algorithm:
1. It contains various resources that meet the requirements of
each process.
2. Each process should provide information to the operating
system for upcoming resource requests, the number of
resources, and how long the resources will be held.
3. It helps the operating system manage and control process
requests for each type of resource in the computer system.
4. The algorithm has a Max resource attribute that indicates
the maximum number of resources each process can hold
in a system.
Disadvantages
1. It requires a fixed number of processes, and no additional
processes can be started in the system while executing the
process.
2. The algorithm does not allow a process to change its
maximum needs while processing its tasks.
3. Each process has to know and state its maximum
resource requirement in advance for the system.
4. The number of resource requests can be granted in a finite
time, but the time limit for allocating the resources is one
year.
work.
Need[i][j] = Max[i][j] - Allocation[i][j].
5. Finish: It is a vector of order n. It includes a
Boolean value (true/false) indicating whether each process
has been allocated its requested resources and has released
all its resources after finishing its task.
The Banker's Algorithm is the combination of the safety
algorithm and the resource request algorithm to control the
processes and avoid deadlock in a system:
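The safety-algorithm half can be sketched directly from the Need = Max - Allocation relation above. This is an illustrative sketch, not a production implementation:

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm sketch: returns True if the state is safe.
    available: length-m vector; max_need, allocation: n x m matrices."""
    n, m = len(max_need), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)        # Work := Available
    finish = [False] * n          # Finish[i] := False for every process
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                 # process i can finish and
                    work[j] += allocation[i][j]    # release its allocation
                finish[i] = True
                progress = True
    return all(finish)            # safe iff every process could finish
```

A request is granted only if pretending to allocate it still leaves the system in a state for which `is_safe` returns True.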
SECTION-III
MEMORY MANAGEMENT
Logical versus Physical address space:
Parameter     LOGICAL ADDRESS                      PHYSICAL ADDRESS

Basic         Generated by the CPU.                Location in a memory unit.

Address       Logical Address Space is the set     Physical Address Space is the set
Space         of all logical addresses generated   of all physical addresses mapped
              by the CPU in reference to a         to the corresponding logical
              program.                             addresses.

Generation    Generated by the CPU.                Computed by the MMU.

Note that users deal only with logical addresses. The
MMU translates the logical address. The output of this process
is the appropriate physical address of the data in RAM. An
address binding can be done in three different ways:
1. Compile Time: An absolute address can be generated if
you know where a process will reside in memory at
compile time. That is, a physical address is generated in
the program executable during compilation. Loading such
an executable into memory is very fast. But if another
process occupies the generated address space, then the
program crashes, and it becomes necessary to recompile
the program to use virtual address space.
2. Load Time: If it is not known at the compile time where
the process will reside, then relocated addresses will be
generated. The loader translates the relocated address to
an absolute address. The base address of the process in the
main memory is added to all logical addresses by the
loader to generate the absolute address. If the base address
of the process changes, then we need to reload the process
again.
3. Execution Time: The instructions are already loaded into
memory and are processed by the CPU. Additional
memory may be allocated or reallocated at this time. This
process is used if the process can be moved from one
memory to another during execution (dynamic linking
done during load or run time). e.g., - Compaction.
What is Memory Management Unit (MMU)
The run-time mapping between the virtual and physical
addresses is done by a hardware device known as MMU. The
operating system will handle the processes and move the
processes between disk and memory in memory management.
Contiguous allocation:
If the blocks are allocated to the file in such a way that all the
logical blocks of the file get the contiguous physical block in
the hard disk then such allocation scheme is known as
contiguous allocation.
In the image shown below, there are three files in the directory.
The starting block and the length of each file are mentioned in
the table. We can check in the table that the contiguous blocks
are assigned to each file as per its need.
Advantages
1. It is simple to implement.
2. We will get Excellent read performance.
3. Supports Random Access into files.
Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.
Difference between Contiguous and Noncontiguous
Memory Allocation

S.No   Contiguous Memory Allocation              Non-Contiguous Memory Allocation

1.     Contiguous memory allocation allocates    Non-contiguous memory allocation
       consecutive blocks of memory to a         allocates separate blocks of
       file/process.                             memory to a file/process.

4.     Overhead is minimum, as not many          More overheads are there, as
       address translations are there while      there are more address
       executing a process.                      translations.

5.     Both internal fragmentation and           Only external fragmentation
       external fragmentation occur in the       occurs in the non-contiguous
       contiguous memory allocation method.      memory allocation method.

9.     It is of two types:                       It is of five types:
       1. Fixed (or static) partitioning         1. Paging
       2. Dynamic partitioning                   2. Multilevel Paging
                                                 3. Inverted Paging
                                                 4. Segmentation
                                                 5. Segmented Paging

10.    It could be visualized and                It could be implemented using
       implemented using arrays.                 linked lists.
Paging:
Example
Let us consider a main memory of size 16 KB and a frame size of
1 KB; the main memory will therefore be divided into a
collection of 16 frames of 1 KB each.
There are 4 processes in the system, P1, P2, P3 and P4, of
4 KB each. Each process is divided into pages of 1 KB each so
that one page can be stored in one frame.
Initially, all the frames are empty, so the pages of the
processes will be stored in a contiguous way.
Frames, pages and the mapping between the two is shown in
the image below.
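Address translation under this scheme splits a logical address into a page number and an offset. A sketch using the 1 KB frame size from the example; the page-to-frame mapping below is hypothetical, chosen only for illustration:

```python
FRAME_SIZE = 1024   # 1 KB pages/frames, as in the example

def translate(logical_address, page_table):
    page_number = logical_address // FRAME_SIZE   # which page of the process
    offset = logical_address % FRAME_SIZE         # position inside the page
    frame_number = page_table[page_number]        # page table maps page -> frame
    return frame_number * FRAME_SIZE + offset

# Hypothetical page table: a 4 KB process whose pages sit in frames 4-7.
page_table = {0: 4, 1: 5, 2: 6, 3: 7}
print(translate(2500, page_table))   # page 2, offset 452 -> frame 6 -> 6596
```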
Segmentation:
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page
table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-
sized partitions.
3. Costly memory management algorithms.
CHAPTER-2
VIRTUAL MEMORY
Demand paging:
The CPU has to access the missed page from the secondary
memory. If the number of page faults is very high, then the
effective access time of the system will become very high.
What is Thrashing?
If the number of page faults is equal to the number of referred
pages, or the number of page faults is so high that the CPU
remains busy just reading pages from the secondary memory, then
the effective access time approaches the time taken by the CPU
to read one word from the secondary memory, which is very high.
This concept is called thrashing.
If the page fault rate is PF, the time taken in getting a page
from the secondary memory and restarting is S (service
time), and the memory access time is ma, then the effective
access time can be given as:
EAT = PF x S + (1 - PF) x ma
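The formula can be evaluated directly. The fault rate and timing values below are hypothetical, chosen only to show how even a tiny fault rate dominates the effective access time:

```python
def effective_access_time(pf, s, ma):
    # EAT = PF x S + (1 - PF) x ma
    return pf * s + (1 - pf) * ma

# Hypothetical numbers: 0.1% fault rate, 8 ms (8,000,000 ns) service time,
# 200 ns memory access time -> EAT is about 8200 ns, 40x the plain access.
print(effective_access_time(0.001, 8_000_000, 200))
```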
Performance of Paging:
The TLB is used to improve the performance of paging.
The TLB is a hardware device implemented using
associative registers.
TLB access time is very small compared to the main memory
access time.
Let the main memory access time be 'm', the TLB access time be
'c', and the TLB hit ratio be 'x'. Then the expression for the
performance of paging is as follows.
Effective Memory Access Time (E.M.A.T) with TLB
= x(c + m) + (1 - x)(c + 2m)
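Evaluating the expression with some hypothetical values (hit ratio x = 0.8, TLB time c = 20 ns, memory time m = 100 ns):

```python
def emat_with_tlb(x, c, m):
    # E.M.A.T = x(c + m) + (1 - x)(c + 2m)
    # TLB hit: one TLB lookup + one memory access.
    # TLB miss: TLB lookup + page-table access + memory access (2m).
    return x * (c + m) + (1 - x) * (c + 2 * m)

print(emat_with_tlb(0.8, 20, 100))   # ~140 ns, versus 220 ns with no TLB hits
```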
Page replacement:
Thrashing:
If D is the total demand for frames and WSSi is the working set
size for process i,
D = Σ WSSi
Now, if 'm' is the number of frames available in the memory,
there are two possibilities:
o D>m, i.e., total demand exceeds the number of frames,
then thrashing will occur as some processes would not get
enough frames.
o D<=m, then there would be no thrashing.
If there are enough extra frames, then some more processes can
be loaded into the memory. On the other hand, if the summation
of working set sizes exceeds the frames' availability, some of
the processes have to be suspended (swapped out of memory).
This technique prevents thrashing along with ensuring the
highest degree of multiprogramming possible. Thus, it
optimizes CPU utilization.
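The D > m test above is a one-liner; a small sketch (the working-set sizes here are made up for illustration):

```python
def will_thrash(working_set_sizes, frames):
    # D = sum of WSS_i over all processes; thrashing occurs when D > m
    return sum(working_set_sizes) > frames

print(will_thrash([10, 20, 30], 50))   # True: total demand 60 exceeds 50 frames
print(will_thrash([10, 20], 50))       # False: total demand 30 fits
```

When the test comes out True, the working-set model suggests suspending (swapping out) some process until D falls back below m.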
3. Page Fault Frequency
A more direct approach to handle thrashing is the one that uses
the Page-Fault Frequency concept.
SECTION-IV
File Management
from the hard disk. The hard disk is divided into various tracks
and sectors. Therefore, in order to store and retrieve files,
the logical blocks need to be mapped to physical blocks. This
mapping is done by the file organization module, which is also
responsible for free space management.
o Once the file organization module has decided which
physical block the application program needs, it passes this
information to the basic file system. The basic file system is
responsible for issuing the commands to I/O control in
order to fetch those blocks.
o I/O control contains the code by which it can access the
hard disk. This code is known as the device driver. I/O
control is also responsible for handling interrupts.
Advantages:
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.
Disadvantages:
If we access many files at the same time, it results in low
performance.
We can implement a file system by using two types of data
structures:
1. On-disk Structures –
Generally, they contain information about the total number of
disk blocks, free disk blocks, their locations, etc. Given
below are different on-disk structures:
1. Boot Control Block –
It is usually the first block of the volume and it contains the
information needed to boot an operating system. In UNIX it
is called the boot block and in NTFS it is called the partition
boot sector.
2. Volume Control Block –
It has information about a particular partition, e.g., free
block count, block size, block pointers, etc. In UNIX it
is called the super block and in NTFS it is stored in the
master file table.
3. Directory Structure –
They store file names and associated inode numbers. In
UNIX, this includes file names and associated inode numbers,
and in NTFS it is stored in the master file table.
4. Per-File FCB –
It contains details about a file and has a unique identifier
number to allow association with a directory entry. In NTFS
it is stored in the master file table.
2. In-Memory Structures –
They are maintained in main memory and are helpful for file
system management and for caching. Several in-memory
structures are given below:
5. Mount Table –
It contains information about each mounted volume.
6. Directory-Structure cache –
This cache holds the directory information of recently
accessed directories.
7. System wide open-file table –
It contains the copy of FCB of each open file.
8. Per-process open-file table –
It contains information about the files opened by that
particular process, and it maps to the appropriate entry in
the system-wide open-file table.
Directory Implementation :
9. Linear List –
It maintains a linear list of filenames with pointers to the
data blocks. It is time-consuming to search. To create a new
file, we must first search the directory to be sure that no
existing file has the same name, and then add the file at the
end of the directory. To delete a file, we search the
directory for the named file and release the space allocated
to it. To reuse the directory entry, we can either mark the
entry as unused or attach it to a list of free directory
entries.
10. Hash Table –
The hash table takes a value computed from the file name
and returns a pointer to the file. It decreases the directory
search time. The insertion and deletion of files are easy.
The major difficulties are that hash tables generally have a
fixed size and that the hash function depends on that size.
Allocation Methods
There are various methods which can be used to allocate disk
space to the files. Selection of an appropriate allocation method
will significantly affect the performance and efficiency of the
system. Allocation method provides a way in which the disk
will be utilized and the files will be accessed.
There are following methods which can be used for allocation.
1. Contiguous Allocation.
2. Extents
3. Linked Allocation
4. Clustering
5. FAT
6. Indexed Allocation
7. Linked Indexed Allocation
8. Multilevel Indexed Allocation
9. Inode
Contiguous allocation:
The directory entry for a file with contiguous allocation
contains
Address of starting block
Length of the allocated portion.
Advantages
1. It is simple to implement.
2. It gives excellent read performance.
3. It supports random access into files.
Disadvantages
1. The disk will become fragmented.
2. It may be difficult for a file to grow.
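Random access under contiguous allocation reduces to a single
addition. A minimal sketch, with a made-up directory table (the file
names, start blocks, and lengths are illustrative only):

```python
# Hypothetical directory table: each file is described only by its
# starting physical block and its length in blocks.
directory = {
    "mail": {"start": 19, "length": 6},
    "list": {"start": 28, "length": 4},
}

def physical_block(directory, filename, logical_block):
    """Map a logical block of a file to its physical block on disk.

    Random access is cheap: one addition, no pointer chasing.
    """
    entry = directory[filename]
    if logical_block >= entry["length"]:
        raise IndexError("logical block beyond end of file")
    return entry["start"] + logical_block
```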
Linked Allocation:
In linked allocation, each file is a linked list of disk blocks,
which may be scattered anywhere on the disk. Each block contains
a pointer to the next block of the file.
Advantages
1. There is no external fragmentation with linked allocation.
2. Any free block can be utilized in order to satisfy the file
block requests.
3. File can continue to grow as long as the free blocks are
available.
4. Directory entry will only contain the starting block
address.
Disadvantages
1. Random access is not provided.
2. Pointers require some space in the disk blocks.
3. If any pointer in the linked list is broken, the rest of the
file becomes inaccessible, i.e., the file gets corrupted.
4. Blocks must be traversed one by one to reach a given block.
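Why random access is lost can be seen in a small sketch. The block
numbers and pointer table below are hypothetical; each block stores
the address of the next block of the same file:

```python
# Hypothetical on-disk pointers: next_pointer[b] is the next block of
# the same file, or None at the end of the file. The directory entry
# stores only the starting block (9 here).
next_pointer = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}

def file_blocks(start):
    """Follow the chain of pointers. Reaching the k-th block requires
    reading all k blocks before it, so there is no random access."""
    blocks = []
    block = start
    while block is not None:
        blocks.append(block)
        block = next_pointer[block]
    return blocks
```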
Indexed Allocation:
Instead of maintaining a file allocation table of all the disk
pointers, the indexed allocation scheme stores all the disk
pointers in one of the blocks, called the index block. The index
block doesn't hold the file data; it holds the pointers to all
the disk blocks allocated to that particular file. The directory
entry will only contain the index block address.
Advantages
1. Supports direct access.
2. A bad data block causes the loss of only that block.
Disadvantages
1. A bad index block could cause the loss of the entire file.
2. The size of a file depends upon the number of pointers an
index block can hold.
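A minimal sketch of indexed allocation, with a made-up index block:
the directory needs only the index block's address, and any data
block of the file is reached with a single lookup.

```python
# Hypothetical index blocks: block 19 is an index block whose entries
# point to the five data blocks of one file.
index_blocks = {
    19: [9, 16, 1, 10, 25],
}

def read_block(index_block_addr, logical_block):
    """Direct access: one lookup in the index block, no chain
    traversal, regardless of which logical block is requested."""
    pointers = index_blocks[index_block_addr]
    return pointers[logical_block]
```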
Free Space Management
1. Bit Vector –
The free-space list can be implemented as a bit map or bit
vector: each block is represented by one bit. If the block is
free, the bit is 1; if the block is allocated, the bit is 0.
Advantages –
Simple to understand.
It is easy to find the first free block: the bit map is scanned
one word at a time. A word equal to 0 contains no free block and
is skipped; the first non-zero word must contain a free block,
since all bits are not 0. After the non-0 word is found, we look
for the first 1 bit. Suppose this is the 5th bit of the non-zero
word, and the word size is 8 bits. So, offset = 5.
Therefore, the first free block number = (number of 0-valued
words) * 8 + offset = 8*0 + 5 = 5.
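The first-free-block search described above can be sketched as
follows, assuming an 8-bit word size and the convention that a 1 bit
means the block is free:

```python
WORD_SIZE = 8  # assumed word size, matching the 8*0+5 example above

def first_free_block(words):
    """Skip all-zero words, then find the first 1 bit (scanning from
    the most significant bit) in the first non-zero word.
    Block number = zero_words * WORD_SIZE + offset."""
    for word_index, word in enumerate(words):
        if word == 0:
            continue  # no free block in this word
        for offset in range(WORD_SIZE):
            if word & (1 << (WORD_SIZE - 1 - offset)):
                return word_index * WORD_SIZE + offset
    return None  # no free block on the disk
```

For example, a bit map whose first word is 0b00000100 has its first
1 bit at offset 5, giving free block number 8*0 + 5 = 5.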
2. Linked List –
It is another approach for free space management. This approach
links together all the free blocks and keeps a pointer in the
cache which points to the first free block.
Thus, all the free blocks on the disk are linked together
through pointers. Whenever a block gets allocated, its previous
free block is linked to its next free block.
Grouping –
This approach stores the addresses of free blocks in the first
free block. The first free block stores the addresses of, say,
n free blocks. Out of these n blocks, the first n-1 blocks are
actually free, and the last block contains the addresses of the
next n free blocks.
An advantage of this approach is that the addresses of a whole
group of free disk blocks can be found easily.
Counting –
This approach stores the address of the first free disk block
and a number n of free contiguous disk blocks that follow the
first block.
Every entry in the list would contain:
1. Address of first free disk block
2. A number n
For example, in Figure-1, the first entry of the free space
list would be: ([Address of Block 5], 2), because 2
contiguous free blocks follow block 5.
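The counting scheme can be sketched by building (first_block, n)
entries from a sorted list of free block numbers, where n counts the
contiguous free blocks that follow the first one (the block numbers
below are illustrative):

```python
def counting_entries(free_blocks):
    """Compress a sorted list of free block numbers into counting
    entries: (address of first free block, number n of contiguous
    free blocks that follow it)."""
    entries = []
    i = 0
    while i < len(free_blocks):
        start = free_blocks[i]
        n = 0
        # extend the run while the next block number is contiguous
        while i + 1 < len(free_blocks) and free_blocks[i + 1] == free_blocks[i] + 1:
            n += 1
            i += 1
        entries.append((start, n))
        i += 1
    return entries
```

For instance, if blocks 5, 6, 7 are free, the entry is (5, 2), as in
the example above.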
CHAPTER-6
DEVICE MANAGEMENT
Disk structure:
The disk can be viewed as a large one-dimensional array of
logical blocks. This array is mapped onto the sectors of the
disk sequentially: sector 0 is the first sector of the first
track on the outermost cylinder, and the mapping proceeds in
order through that track, then through the rest of the tracks
in that cylinder, and then through the rest of the cylinders.
Disk Scheduling:
FCFS Scheduling:
In FCFS (First Come First Serve), requests are addressed in the
order they arrive in the disk queue.
Example:
1. Suppose the order of request is-
(82,170,43,140,24,16,190)
And current position of Read/Write head is : 50
So, total seek time:
=(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16)
=642
Advantages:
Every request gets a fair chance and no request is postponed
indefinitely.
Disadvantages:
It does not try to optimize the seek time, so the total seek
time may be very high.
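The example above services requests strictly in arrival order
(first come first serve). A minimal sketch of that seek-time
calculation:

```python
def fcfs_seek_time(head, requests):
    """Total head movement when requests are serviced in arrival
    order; each seek is the absolute distance moved."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total
```

With head 50 and queue (82,170,43,140,24,16,190) this yields 642, as
computed above.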
SSTF Scheduling:
In SSTF (Shortest Seek Time First), the pending request closest
to the current position of the Read/Write head is serviced
first.
Example:
1. Suppose the order of request is-
(82,170,43,140,24,16,190)
And current position of Read/Write head is : 50
So, total seek time:
=(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170)
=208
Advantages:
The average response time decreases and throughput increases.
Disadvantages:
Requests far away from the head may suffer starvation, and there
is overhead in finding the closest request each time.
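The example above always serves the pending request nearest to the
current head position (shortest seek time first). A minimal sketch:

```python
def sstf_seek_time(head, requests):
    """Repeatedly pick the pending request with the smallest seek
    distance from the current head position."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total
```

For head 50 and queue (82,170,43,140,24,16,190) the service order is
43, 24, 16, 82, 140, 170, 190, giving 208 as above.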
SCAN Scheduling:
In SCAN, the disk arm moves in one direction, servicing all
requests on the way until it reaches the end of the disk, and
then reverses direction. It is therefore also called the
elevator algorithm.
Example:
1. Suppose the requests to be addressed are-
82,170,43,140,24,16,190. And the Read/Write arm is
at 50, and it is also given that the disk arm should
move “towards the larger value”. Assume the disk has
200 cylinders (0 to 199).
Therefore, the seek time is calculated as:
=(199-50)+(199-16)
=332
Advantages:
High throughput
Low variance of response time
Good average response time
Disadvantages:
Requests at locations just visited by the arm have to wait a
long time for the arm to return.
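The sweep above can be sketched as follows; the 200-cylinder disk
size (cylinders 0-199) is an assumption taken from the (199-50) term
in the example:

```python
def scan_seek_time(head, requests, disk_size=200):
    """SCAN moving toward larger values first: sweep to the disk end,
    then reverse and sweep down to the lowest pending request."""
    smaller = [r for r in requests if r < head]
    total = 0
    if smaller:
        total += (disk_size - 1 - head) + (disk_size - 1 - min(smaller))
    elif requests:
        # nothing below the head: the arm can stop at the largest request
        total += max(requests) - head
    return total
```

For the example this is (199-50) + (199-16) = 332.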
C-SCAN Scheduling:
In C-SCAN (Circular SCAN), the arm moves towards one end
servicing requests, and on reaching the end it jumps back to the
other end without servicing any request on the return trip, then
starts servicing again.
Example:
Suppose the requests to be addressed are-
82,170,43,140,24,16,190. And the Read/Write arm is at
50, and it is also given that the disk arm should
move “towards the larger value”. Assume the disk has
200 cylinders (0 to 199).
So, the seek time is calculated as:
=(199-50)+(199-0)+(43-0)
=391
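For the same request sequence, a C-SCAN sketch, assuming a
200-cylinder disk (0-199) and counting the circular jump from the
last cylinder back to cylinder 0 as head movement (conventions vary
between textbooks):

```python
def cscan_seek_time(head, requests, disk_size=200):
    """C-SCAN moving toward larger values: sweep to the disk end,
    jump to cylinder 0, then sweep up to the largest request that
    was below the starting position."""
    smaller = [r for r in requests if r < head]
    total = 0
    if smaller:
        total += disk_size - 1 - head   # e.g. 50 -> 199
        total += disk_size - 1          # circular jump 199 -> 0
        total += max(smaller)           # e.g. 0 -> 43
    elif requests:
        total += max(requests) - head
    return total
```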
LOOK Scheduling:
LOOK is similar to SCAN, but the arm reverses direction at the
last request in each direction instead of going all the way to
the end of the disk.
Example:
1. Suppose the requests to be addressed are-
82,170,43,140,24,16,190. And the Read/Write arm is
at 50, and it is also given that the disk arm should
move “towards the larger value”.
So, the seek time is calculated as:
=(190-50)+(190-16)
=314
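In the calculation above, the arm reverses at the last request (190)
rather than at the physical end of the disk. A minimal sketch:

```python
def look_seek_time(head, requests):
    """LOOK toward larger values: go up to the last request, then
    reverse down to the lowest pending request."""
    larger = [r for r in requests if r >= head]
    smaller = [r for r in requests if r < head]
    total = 0
    if larger:
        total += max(larger) - head     # e.g. 50 -> 190
        head = max(larger)
    if smaller:
        total += head - min(smaller)    # e.g. 190 -> 16
    return total
```

For the example this is (190-50) + (190-16) = 314.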
C-LOOK Scheduling:
C-LOOK is similar to C-SCAN, but the arm goes only as far as the
last request in each direction and then jumps directly to the
lowest pending request without servicing anything on the return
trip.
Example:
1. Suppose the requests to be addressed are-
82,170,43,140,24,16,190. And the Read/Write arm is
at 50, and it is also given that the disk arm should
move “towards the larger value”
So, the seek time is calculated as:
=(190-50)+(190-16)+(43-16)
=341
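In the calculation above, the arm goes up to the last request, jumps
to the lowest pending request, and continues upward. A minimal
sketch; it assumes, as in the example, that there are pending
requests on both sides of the head:

```python
def clook_seek_time(head, requests):
    """C-LOOK toward larger values: up to the last request, jump to
    the lowest pending request, then continue upward to the last
    remaining request below the starting position."""
    larger = [r for r in requests if r >= head]    # 82, 140, 170, 190
    smaller = [r for r in requests if r < head]    # 43, 24, 16
    total = max(larger) - head                     # 50 -> 190
    total += max(larger) - min(smaller)            # jump 190 -> 16
    total += max(smaller) - min(smaller)           # 16 -> 43
    return total
```

For the example this is (190-50) + (190-16) + (43-16) = 341.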
2. RSS – It stands for Random Scheduling, and as its name
suggests, it services requests in random order. It fits
situations where scheduling involves random attributes such
as random processing times, random due dates, random
weights, and stochastic machine breakdowns, which is why it
is usually used for analysis and simulation.
3. LIFO – In the LIFO (Last In, First Out) algorithm, the
newest jobs are serviced before the existing ones, i.e., the
request that entered last is serviced first, and then the
rest are serviced in the same order.
Advantages
Maximizes locality and resource utilization