OS Unit - III

Chapter 3 of the document discusses memory management in operating systems, detailing how the OS allocates, deallocates, and optimizes memory usage for applications. It covers concepts like logical vs. physical address space, memory allocation schemes, and the importance of swapping processes between main and secondary memory. Additionally, it explains dynamic loading and linking as techniques to improve memory efficiency and program execution.

Operating System

CHAPTER 3
MEMORY MANAGEMENT

INTRODUCTION
The operating system manages the resources of the computer, controls
application launches, and performs tasks such as data protection and system
administration. The resource that the operating system uses the most is memory.
Memory is a storage area on the computer that contains the instructions and data
that the computer uses to run the applications.
When the applications or the operating system need more memory than is
available on the computer, the system must swap the current contents of the
memory space with the contents of the memory space that is being requested. In
the same way, different situations need different memory management techniques.
Some cases call for the use of paging, while others may require the use of an
on-disk cache. Ultimately, deciding which memory management technique to use is
a matter of optimizing performance for the available hardware and software.
Memory management is allocating, freeing, and re-organizing memory in a
computer system to optimize the available memory or to make more memory
available. It keeps track of every memory location (if it is free or occupied).

Memory Allocation Schemes


Contiguous Memory Management Schemes: Contiguous memory allocation
means assigning continuous blocks of memory to the process. The best example
is Array.
Non-contiguous Memory Management Schemes: The program is divided into
blocks (fixed size or variable size) and loaded at different portions of memory. That
means program blocks are not stored adjacent to each other.
Memory management in OS is a technique of controlling and managing the
functionality of random-access memory (primary memory). It is used for achieving
better concurrency, system performance, and memory utilization. Memory
management moves processes from primary memory to secondary memory and vice
versa. It also keeps track of available memory, allocated memory, and unallocated memory.

Page 71 of 159
Operating System

Uses of Memory Management in OS:


• Memory management keeps track of the status of each memory location,
whether it is allocated or free.
• Memory management enables computer systems to run programs that require
more main memory than the amount of free main memory available on the
system. This is achieved by moving data between primary and secondary
memory.
• Memory management addresses the system’s primary memory by providing
abstractions such that the programs running on the system perceive that a large
memory is allocated to them.
• It is the job of memory management to protect the memory allocated to all the
processes from being corrupted by other processes. If this is not done, the
computer may exhibit unexpected/faulty behaviour.
• Memory management enables sharing of memory spaces among processes, with
the help of which, multiple programs can reside at the same memory location
(although only one at a time).

LOGICAL VS. PHYSICAL ADDRESS SPACE


Logical and physical addresses are important terms. The logical address is
generated by the CPU while a program executes, whereas the physical address
refers to a location in the memory unit.
Physical addresses are specific to the hardware architecture and memory
layout of a specific computer system. They are not portable between systems or
even architectures. Logical addresses, in turn, may be portable and adapted for use
on many different systems or architectures, provided that the translation
mechanisms of those addresses are compatible.

Physical Address Space: Physical addresses are addresses that specify actual
(real) physical locations in memory. It is a real memory location where the data is
stored. Hardware, such as the CPU and memory controller, can directly access
corresponding memory locations with physical addresses, and translation or
mapping is not involved.
Physical addresses are low-level addressing modes that refer to hardware
architecture and point to a specific computer's memory layout. The hardware uses
this mode directly to access memory locations and communicate with devices.


Physical addressing allows hardware to access memory locations directly: it points
exactly to where in memory data is being written or read.
The physical address space maps directly onto the fixed, predetermined
hardware design and memory configuration. Its size is proportional to the amount
of physical memory in the system; in other words, the size of the physical address
space is determined by the total amount of memory installed.
Physical addresses cannot be moved from one system or architecture to
another, since they are directly tied to a particular computer system's hardware
and memory layout. Physical addresses are more efficient for memory access than
logical addresses, because the overhead of address translation and mapping is
avoided.

Logical Address Space: Logical addresses are virtual addresses generated by
the CPU at run time. They do not exist as physical locations in memory; instead,
they act as references that the CPU uses to access real memory locations.
Logical addresses are addresses used by software programs and operating
systems to simplify memory management and provide a more flexible and abstract
way of accessing memory or devices. Logical addresses are part of the virtual
memory abstraction, which allows programs to operate in a larger logical address
space than the available physical memory.
A logical address must be mapped or translated to its corresponding
physical addresses before the hardware can use it. This translation is typically
performed by a hardware component called the Memory Management Unit (MMU).
Using logical addresses allows for memory protection mechanisms, where different
programs or processes are isolated from each other's memory spaces, enhancing
security and stability.
Logical addresses are generally more portable and can be used across
different systems or architectures as long as the address translation mechanisms
are compatible.
The abstraction provided by logical addresses enables features like demand
paging, swapping, and shared memory, which are crucial for efficient memory
management and resource utilization.
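As a toy illustration of this abstraction (not how a real MMU is implemented, and with made-up base addresses), the same logical address in two isolated processes can resolve to two different physical locations:

```python
# Toy illustration (not a real MMU): each process has its own hypothetical
# base address, so identical logical addresses land in different frames.

process_base = {"P1": 10000, "P2": 50000}

def to_physical(process, logical_address):
    # The hardware would add the process's base to every logical address.
    return process_base[process] + logical_address

print(to_physical("P1", 64))  # 10064
print(to_physical("P2", 64))  # 50064: same logical address, different location
```

This is also the essence of memory protection: because translation goes through a per-process mapping, one process cannot name another process's memory.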

Tabular Comparison Between Physical Address and Logical Address:
Here's a comparison of the key differences between physical addresses and logical
addresses:

Representation
  Physical Address: Represents the actual physical location of data in memory or devices.
  Logical Address: A virtual or symbolic representation of memory locations, used by software programs.

Generation
  Physical Address: Generated based on the hardware architecture and memory configuration.
  Logical Address: Generated by the CPU while a program is running.

Address Translation
  Physical Address: Used directly by hardware to access memory locations; no translation required.
  Logical Address: Must be translated or mapped to its corresponding physical address by the Memory Management Unit (MMU) before being used by the hardware.

Address Space
  Physical Address: Limited by the amount of installed physical memory in the system.
  Logical Address: Can be part of a larger virtual address space, exceeding the available physical memory.

Portability
  Physical Address: Tied to a specific hardware architecture and memory layout; not portable across systems.
  Logical Address: More portable across different systems and architectures, as long as the address translation mechanisms are compatible.

Purpose
  Physical Address: Directly accessed by hardware components for low-level operations and efficient data transfer.
  Logical Address: Used by software programs and operating systems for memory management, memory protection, and efficient resource sharing.

Abstraction Level
  Physical Address: Low-level, hardware-specific addresses.
  Logical Address: Higher-level, software-friendly addresses.

Mapping
  Physical Address: No mapping required; used directly by hardware.
  Logical Address: Mapped or translated to physical addresses by the MMU.

Memory Access
  Physical Address: Provides direct access to physical memory locations.
  Logical Address: Provides an abstraction layer over physical memory access.

Memory Management Features
  Physical Address: Limited memory management features.
  Logical Address: Enables features like virtual memory, demand paging, swapping, and shared memory.

SWAPPING
Swapping in OS is one of the schemes that fulfils the goal of maximum
utilization of CPU and memory management by swapping processes into and out of
the main memory. Swap-in moves a process from the hard drive (secondary
memory) into RAM, and swap-out moves a process from RAM (main memory) to
the hard drive.
Suppose several processes such as P1, P2, P3, and P4 are waiting in the
ready queue, and P1 and P2 are very memory-consuming. When the processes
start executing, there may not be enough memory left for P3 and P4, since only a
limited amount of memory is available for process execution.
Swapping in the operating system is a memory management scheme that
temporarily swaps out an idle or blocked process from the main memory to
secondary memory which ensures proper memory utilization and memory
availability for those processes that are ready to be executed.


When a memory-consuming process terminates, the memory dedicated to its
execution becomes free. The swapped-out processes are then brought back into
the main memory, and their execution resumes.
The area of the secondary memory where swapped-out processes are stored
is called swap space. The swapping method forms a temporary queue of swapped
processes in the secondary memory. In the case of high-priority processes, the
process with low priority is swapped out of the main memory and stored in swap
space then the process with high priority is swapped into the main memory to be
executed first.
One of the main goals of an operating system is maximum utilization of the
CPU: some process should be executing at all times, the CPU should never stay
idle, and no process should starve or remain blocked indefinitely.

There are two important concepts in the process of swapping which are as follows:
1. Swap In
2. Swap Out
Swap In: The method of taking a process from secondary memory (hard drive)
and restoring it to the main memory (RAM) for execution is known as the Swap
In method.
Swap Out: The method of taking a process out of the main memory (RAM) and
sending it to secondary memory (hard drive), so that processes with higher priority
or larger memory requirements can be executed, is known as the Swap Out
method.
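The two operations can be pictured with a small simulation. This is purely illustrative: real swapping is performed by the OS kernel on whole process images, and the process names here are hypothetical.

```python
# Minimal simulation of swap-out and swap-in.

ram = ["P1", "P2"]   # processes resident in main memory
swap_space = []      # swap area on secondary storage

def swap_out(process):
    # Move an idle or low-priority process from RAM to swap space.
    ram.remove(process)
    swap_space.append(process)

def swap_in(process):
    # Bring a swapped process back into RAM for execution.
    swap_space.remove(process)
    ram.append(process)

swap_out("P2")
print(ram, swap_space)   # ['P1'] ['P2']
swap_in("P2")
print(ram, swap_space)   # ['P1', 'P2'] []
```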

Advantages of Swapping: The advantages of the swapping method are listed
as follows:
• Swapping in OS helps in achieving the goal of Maximum CPU Utilization.
• Swapping ensures proper memory availability for every process that needs to be
executed.
• Swapping helps avoid process starvation: no process has to wait indefinitely for
memory before it can execute.


• The CPU can keep multiple tasks in progress with the help of swapping, so
processes do not have to wait as long before execution.
• Swapping ensures proper RAM (main memory) utilization.
• Swapping creates a dedicated disk partition in the hard drive for swapped
processes which is called swap space.
• Swapping in OS is an economical process.
• Swapping method can be applied on priority-based process scheduling where
a high-priority process is swapped in and a low-priority process is swapped
out which improves the performance.

Disadvantages of Swapping: The disadvantages of the swapping method are
listed as follows:
• If the system loses power during heavy swapping activity, the user may lose all
the information related to the program.
• If the swapping method uses a poor algorithm, the number of page faults can
increase, which degrades overall performance.
• There may be inefficiency in a case when there is some common resource used
by the processes that are participating in the swapping process.

MEMORY MANAGEMENT REQUIREMENT


Memory management in OS is the process of regulating and organizing
computer memory to allocate and deallocate memory space efficiently for programs
and applications that require it. This helps to guarantee that the system runs
efficiently and has enough memory to run apps and tasks.

Functions of Memory Management: Some of the major functions fulfilled
by memory management are discussed below.
• Memory Allocation: Memory management ensures that the needed memory
space is allocated to the new process whenever a process is created and requires
memory. Memory Management also keeps track of the system’s allocated and
free memory.
• Memory Deallocation: Like memory allocation, whenever a process completes
its execution, memory management ensures that the space and the memory
resource it holds are released. Any newly created process can use the freed
memory.
• Memory Sharing: Memory sharing is also one of the main goals of memory
management in OS. Some processes might require the same memory
simultaneously. Memory management ensures that this is made possible
without breaking any authorization rules.
• Memory Protection: Memory Protection refers to preventing any unauthorized
memory access to any process. Memory management ensures memory protection
by assigning correct permissions to each process.


Why is Memory Management Required


• Memory management allows computer systems to run programs that need more
main memory than the amount of free main memory available on the system.
This is achieved by moving data between primary and secondary memory.
• Memory management maintains track of the status of every memory location,
whether it is allocated or free.
• It is the responsibility of memory management to protect the memory allocated
to all the processes from being corrupted by other processes. The computer may
exhibit faulty/unexpected behaviour if this is not done.
• Memory management allows sharing of memory spaces between processes, with
the help of which multiple programs can reside at the same memory location
• Memory management takes care of the system's primary memory by giving
abstraction such that the programs running on the system perceive a large
memory is assigned to them.

Role of Memory Management: Memory management in an operating system
(OS) is crucial in efficiently utilizing the computer's memory resources. It's
responsible for allocating and deallocating memory to different processes and
ensuring they don't interfere with each other. Memory management keeps track of
which parts of memory are in use and which are available, preventing conflicts and
crashes. It enables processes to share memory and optimizes memory usage to
avoid wastage. Proper memory management ensures that applications run
smoothly, preventing crashes due to memory exhaustion and enabling seamless
multitasking, making the computer more reliable and efficient.

DYNAMIC LOADING
The process of getting a program from secondary storage (hard disk) to the
main memory (RAM) is known as loading. In simple words, loading loads the
program in the main memory.
The entire program and all data of a process must be in physical memory for
the process to execute. The size of a process is thus limited to the size of physical
memory. To obtain better memory-space utilization, we can use dynamic loading.
With dynamic loading, a routine is not loaded until it is called. All routines are kept
on disk in a relocatable load format. The main program is loaded into memory and
is executed. When a routine needs to call another routine, the calling routine first
checks to see whether the other routine has been loaded. If not, the relocatable
linking loader is called to load the desired routine into memory and to update the
program's address tables to reflect this change. Then control is passed to the newly
loaded routine.
The advantage of dynamic loading is that an unused routine is never loaded.
This method is particularly useful when large amounts of code are needed to
handle infrequently occurring cases, such as error routines. In this case, although
the total program size may be large, the portion that is used (and hence loaded)
may be much smaller.


Dynamic loading does not require special support from the operating system.
It is the responsibility of the users to design their programs to take advantage of
such a method. Operating systems may help the programmer, however, by
providing library routines to implement dynamic loading.

DYNAMIC LINKING
Linking all the required modules of a program so that execution can proceed
is known as linking. The linker combines the object modules produced by the
compiler or assembler into a single executable module.

Dynamic linking in an operating system refers to the process of linking
program modules at runtime rather than at compile time. In dynamic linking, the
system loads and links the required libraries or functions when a program is
executed, instead of embedding all the library code into the executable during
compilation. This offers several benefits, such as reduced memory usage, easier
updates, and modular software development.

Working of Dynamic Linking:


Executable and Libraries Separation: The program is compiled without
directly including the library code in the executable. Instead, it includes references
to the shared libraries. These libraries are usually shared object files (.so in
Unix/Linux or .dll in Windows) that are loaded at runtime.
Loader and Linker at Runtime: When the program is launched, the operating
system’s dynamic linker/loader (like ld.so in Unix/Linux) takes over. It identifies
the required shared libraries and loads them into memory if they aren’t already
loaded. The addresses of the required functions or symbols from these libraries are
then resolved (i.e., mapped into the program's memory space).
Shared Libraries: Multiple programs can share the same library in memory. This
reduces redundancy and memory usage since the shared library code is loaded
only once and reused by multiple applications.
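The load-once, share-everywhere behaviour can be modelled in a few lines. This is a toy model, not how a real dynamic linker such as ld.so works on ELF files; the library name "libfoo" and its single symbol are invented for illustration.

```python
# Toy model of a dynamic linker's table of loaded libraries: a shared
# library is loaded into memory once and reused by every program.

loaded_libraries = {}   # library name -> the single in-memory copy

def load_library(name, loader):
    if name not in loaded_libraries:      # load only if not already present
        loaded_libraries[name] = loader()
    return loaded_libraries[name]

make_libfoo = lambda: {"square": lambda x: x * x}

lib_a = load_library("libfoo", make_libfoo)  # program A triggers the load
lib_b = load_library("libfoo", make_libfoo)  # program B reuses the same copy
print(lib_a is lib_b)        # True: one shared copy in memory
print(lib_a["square"](7))    # 49: symbol resolved from the shared library
```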

Advantages:
• Memory Efficiency: Programs don't need to include full copies of libraries,
leading to smaller executables and efficient use of memory.
• Upgradability: Libraries can be updated independently of the programs. A bug
fix or update to the library will affect all programs that use it without needing
recompilation.


• Modularity: Programs can be divided into modules, which can be dynamically
loaded as needed.

Disadvantages:
• Runtime Overhead: Loading libraries and resolving symbols at runtime
introduces a slight performance cost.
• Dependency Issues: If a required shared library is missing or incompatible, the
program might fail to execute properly. This is often referred to as "DLL hell" in
Windows.

MEMORY ALLOCATION METHOD


Operating systems use two types of memory allocation techniques:
contiguous and non-contiguous memory allocation. In contiguous memory
allocation, memory is assigned to a process in a single contiguous block. In non-
contiguous memory allocation, memory is assigned to a process in non-adjacent
blocks.

Contiguous Memory Allocation: Contiguous memory allocation is a
technique where the operating system allocates a contiguous block of memory to a
process. This memory is allocated in a single, continuous chunk, making it easy for
the operating system to manage and for the process to access the memory.
Contiguous memory allocation is suitable for systems with limited memory sizes
and where fast memory access is important. Contiguous memory allocation can be
done in two ways.
• Fixed Partitioning: In fixed partitioning, the memory is divided into fixed-size
partitions, and each partition is assigned to a process. This technique is easy to
implement but can result in wasted memory if a process does not fit perfectly
into a partition.
• Dynamic Partitioning: In dynamic partitioning, the memory is divided into
variable-size partitions, and each partition is assigned to a process. This
technique is more efficient as it allows the allocation of only the required
memory to the process, but it requires more overhead to keep track of the
available memory.
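A first-fit allocator over variable-size partitions can be sketched as follows; the hole list, sizes in KB, and the first-fit policy itself are illustrative choices (best-fit and worst-fit are common alternatives).

```python
# Sketch of dynamic (variable-size) partitioning with a first-fit policy.
# Free memory is modelled as a list of (start, size) holes.

holes = [(0, 100), (250, 300), (600, 150)]   # hypothetical free blocks (KB)

def first_fit(holes, request):
    """Allocate `request` KB from the first hole that is large enough."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size > request:
                holes[i] = (start + request, size - request)  # shrink the hole
            else:
                holes.pop(i)                                  # hole fully used
            return start
    return None   # no hole fits the request

print(first_fit(holes, 120))  # 250: the 100 KB hole is too small
print(holes)                  # [(0, 100), (370, 180), (600, 150)]
```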

Advantages of Contiguous Memory Allocation


• Simplicity: Contiguous memory allocation is a relatively simple technique for
memory management. It requires less overhead and is easy to implement.
• Efficiency: Contiguous memory allocation is an efficient technique for memory
management. Once a process is allocated contiguous memory, it can access the
entire memory block without any interruption.
• Low fragmentation: Since the memory is allocated in contiguous blocks, there
is a lower risk of memory fragmentation. This can result in better memory
utilization, as there is less memory wastage.

Disadvantages of Contiguous Memory Allocation


• Limited flexibility: Contiguous memory allocation is not very flexible as it
requires memory to be allocated in a contiguous block. This can limit the
amount of memory that can be allocated to a process.
• Memory wastage: If a process requires a memory size that is smaller than the
contiguous block allocated to it, there may be unused memory, resulting in
memory wastage.
• Difficulty in managing larger memory sizes: As the size of memory increases,
managing contiguous memory allocation becomes more difficult. This is because
finding a contiguous block of memory that is large enough to allocate to a
process becomes challenging.
• External Fragmentation: Over time, external fragmentation may occur as a
result of memory allocation and deallocation, which may result in non-contiguous
blocks of free memory scattered throughout the system.

Non-contiguous Memory Allocation: Non-contiguous memory
allocation, on the other hand, is a technique where the operating system allocates
memory to a process in non-contiguous blocks. The blocks of memory allocated to
the process need not be contiguous, and the operating system keeps track of the
various blocks allocated to the process. Non-contiguous memory allocation is
suitable for larger memory sizes and where efficient use of memory is important.
Non-contiguous memory allocation can be done in two ways.
• Paging: In paging, the memory is divided into fixed-size pages, and each page is
assigned to a process. This technique is more efficient as it allows the allocation
of only the required memory to the process.
• Segmentation: In segmentation, the memory is divided into variable-sized
segments, and each segment is assigned to a process. This technique is more
flexible than paging but requires more overhead to keep track of the allocated
segments.
Non-contiguous memory allocation is a memory management technique that
divides memory into non-contiguous blocks, allowing processes to allocate memory
that is not necessarily contiguous.
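As a sketch of how paging places a process in non-adjacent blocks, a logical address can be split into a page number and an offset and looked up in a page table; the page size and page-table contents below are hypothetical.

```python
# Sketch of paging: logical address = page number * PAGE_SIZE + offset,
# and the page table maps pages onto scattered physical frames.

PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 9}   # page number -> frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]       # a missing page would fault here
    return frame * PAGE_SIZE + offset

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2054
```

Note that pages 0, 1, and 2 are contiguous in the logical view but live in frames 5, 2, and 9, which is precisely the non-contiguity this section describes.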

Advantages of Non-Contiguous Memory Allocation


• Reduced External Fragmentation: One of the main advantages of non-
contiguous memory allocation is that it can reduce external fragmentation, as
memory can be allocated in small, non-contiguous blocks.
• Increased Memory Utilization: Non-contiguous memory allocation allows for
more efficient use of memory, as small gaps in memory can be filled with
processes that need less memory.
• Flexibility: This technique allows for more flexibility in allocating and
deallocating memory, as processes can be allocated memory that is not
necessarily contiguous.
• Memory Sharing: Non-contiguous memory allocation makes it easier to share
memory between multiple processes, as memory can be allocated in non-
contiguous blocks that can be shared between multiple processes.

Disadvantages of Non-Contiguous Memory Allocation


• Internal Fragmentation: One of the main disadvantages of non-contiguous
memory allocation is that it can lead to internal fragmentation, as memory can
be allocated in small, non-contiguous blocks that are not fully utilized.
• Increased Overhead: This technique requires more overhead than contiguous
memory allocation, as the operating system needs to maintain data structures to
track memory allocation.
• Slower Access: Memory access can be slower than contiguous memory
allocation, as memory can be allocated in non-contiguous blocks that may
require additional steps to access.

SINGLE PARTITION ALLOCATION


The multiprogramming concept emphasizes maximizing CPU utilization by
overlapping CPU and I/O. Memory may be allocated as:
• Single large partition for processes to use or
• Multiple partitions with a single process using a single partition.
This approach keeps the operating system in the lower part of the memory
and user processes in the upper part. With this scheme, the operating system's
code and data are protected from being modified by user processes. The relocation-
register scheme, known as dynamic relocation, is useful for this purpose: it protects
user processes not only from each other but also from changing OS code and data.
Two registers are used: the relocation register, which contains the value of the
smallest physical address, and the limit register, which contains the logical
address range.
Both these are set by the Operating System when the job starts. At load time of the
program (i.e., when it has to be relocated) we must establish “addressability” by
adjusting the relocation register contents to the new starting address for the
program. The scheme is shown in Figure below.
The contents of a relocation register are implicitly added to any address
references generated by the program. Some systems use base registers as
relocation registers for easy addressability as these are within the programmer’s
control. Also, in some systems, relocation is managed and accessed by the
Operating System only.

[Figure: Dynamic relocation. The CPU generates a logical address A, which the
hardware compares with the limit register L. If A < L, the relocation register R is
added to A to form the physical address R + A; otherwise an addressing error is
raised.]

In a dynamic relocation scheme, if the logical address space range is 0 to
Max, then the physical address space range is R + 0 to R + Max (where R is the
relocation register contents). Similarly, the limit register is checked by hardware to
make sure that the logical address generated by the CPU is not bigger than the
program's size.
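The scheme can be written out directly in code; the register values below are hypothetical.

```python
# The dynamic relocation scheme in code: the hardware checks the logical
# address A against the limit register L, then adds the relocation
# register R to form the physical address.

R = 14000   # relocation register: program's starting physical address
L = 4096    # limit register: size of the logical address space

def map_address(A):
    if not (0 <= A < L):
        raise MemoryError("error in addressing")  # trap to the OS
    return R + A

print(map_address(0))     # 14000 (R + 0)
print(map_address(100))   # 14100
```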

MULTIPLE PARTITIONS
This is also known as a static partitioning scheme as shown in the following
Figure. A simple memory management scheme divides memory into n (possibly
unequal) fixed-sized partitions, each of which can hold exactly one process. The
degree of multiprogramming is dependent on the number of partitions. IBM used
this scheme for the System/360 OS/MFT (Multiprogramming with a Fixed number
of Tasks). The partition boundaries are not movable (the system must be rebooted
to move a job). We
can have one queue per partition or just a single queue for all the partitions.
[Figure: A multiple-partition system. Memory from address 0 upward is divided
into fixed partitions (Partition 1, Partition 2, Partition 3) at boundaries a, b, c,
and d, with a job queue feeding each partition.]
Initially, whole memory is available for user processes and is like a large
block of available memory. The operating system keeps details of available memory
blocks and occupied blocks in tabular form. OS also keeps track of the memory
requirements of each process. As processes enter the input queue, each is
allocated space and loaded as soon as sufficient space is available. After
its execution is over it releases its occupied space and OS fills this space with other
processes in the input queue. The block of available memory is known as a Hole.
Holes of various sizes are scattered throughout the memory. When any process
arrives, it is allocated memory from a hole that is large enough to accommodate it.
This example is shown in the Figure given below:

[Figure: A fixed-sized partition system. The OS occupies the lowest 200K; above it
are Partition 1 (100K), Partition 2 (200K), and Partition 3 (500K). Three snapshots
show memory as Process B terminates and Process D arrives; each occupied
partition leaves a hole of unused space.]
If a hole is too large, it is divided into two parts:
1) One part is allocated to the next process in the input queue.
2) The other part is returned to the set of holes.


Within a partition, if two holes are adjacent then they can be merged to
make a single large hole. However, this scheme suffers from a fragmentation
problem. Storage fragmentation occurs either because the user processes do not
completely accommodate the allotted partition or partition remains unused, if it is
too small to hold any process from the input queue. Main memory utilization is
extremely inefficient. Any program, no matter how small, occupies the entire
partition. In our example, process B takes 150K of partition 2 (200K in size). We
are left with a 50K-sized hole. This phenomenon, in which there is wasted space
internal to a partition, is known as internal fragmentation. It occurs because the
initial process is loaded in a partition that is large enough to hold it (i.e., allocated
memory may be slightly larger than requested memory). “Internal” here means
memory that is internal to a partition, but is not in use.
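The internal-fragmentation arithmetic can be worked out directly. Partition sizes follow the example above; the process sizes other than B's 150K are illustrative.

```python
# Internal fragmentation in a fixed-partition system: each partition wastes
# the difference between its fixed size and the size of its process.

partition_sizes = {"P1": 100, "P2": 200, "P3": 500}   # KB
process_sizes   = {"P1": 50,  "P2": 150, "P3": 450}   # KB (illustrative)

waste = {name: partition_sizes[name] - process_sizes[name]
         for name in partition_sizes}
print(waste)                  # {'P1': 50, 'P2': 50, 'P3': 50}
print(sum(waste.values()))    # 150 KB lost to internal fragmentation
```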

Variable-sized Partition: This scheme is also known as dynamic partitioning.


In this scheme, boundaries are not fixed. Each process is allocated exactly as
much memory as it requires, so there is no wastage: the partition size equals the
size of the user process. Initially, when processes start, this wastage can be
avoided, but later, when they terminate, they leave holes in main storage. Other
processes can use these holes, but eventually the holes become too small to
accommodate new jobs, as shown in the figure below.

[Figure: Variable-sized partitions. The OS sits at the bottom of memory, with
Processes A, B, and C above it. When Process A terminates it leaves a hole; when
Process D arrives it occupies part of that hole.]

IBM used this technique for OS/MVT (Multiprogramming with a Variable
Number of Tasks), as the partitions are of variable length and number. But the
fragmentation anomaly still exists in this scheme. As time goes on and processes
are loaded and removed from memory, fragmentation increases and memory
utilization declines. This wastage of memory, which is external to the partitions, is
known as external fragmentation. Here, there may be enough total memory to
satisfy a request, but because it is fragmented into small non-contiguous holes, it
cannot be utilised.

COMPACTION
Compaction is a memory management technique in which the free space of a running system is consolidated to reduce fragmentation problems and improve memory-allocation efficiency. Compaction is used by many modern operating systems, such as Windows, Linux, and Mac OS X. In the figure, used memory (black) is interleaved with unused memory (white); compaction slides the used regions together so that all the empty spaces are combined into one. This prevents and solves the problem of fragmentation, but it requires a great deal of CPU time.

Page 83 of 159
Operating System

By compacting memory, the operating system can reduce or eliminate fragmentation and make it easier for programs to allocate and use memory.
The compaction process usually consists of two steps:
• Moving all pages that are in use into one large contiguous area.
• Merging the space freed behind them into a single large hole.

After compaction, all the holes are contiguous: the OS gathers the loaded processes together in adjacent partitions, and the merged hole can accommodate new processes according to their needs. This method is also known as de-fragmentation.
At the time of compaction, the CPU stops the execution of the current process, because the process will resume from a different location after compaction. If the CPU did not stop execution, it might fetch instructions from the old locations instead of the next instruction of the same process at its new location in memory.
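The move-then-merge idea can be sketched with a simplified model in which memory is a list of blocks tagged used or free (this representation is an assumption for illustration):

```python
# Simplified compaction: slide all used blocks to the low end of memory
# and merge every free block into one large hole at the high end.

def compact(blocks):
    """blocks: list of (tag, size) pairs, where tag is 'used' or 'free'."""
    used = [b for b in blocks if b[0] == "used"]
    free_total = sum(size for tag, size in blocks if tag == "free")
    # All used blocks first, then one merged hole (if any free space exists).
    return used + ([("free", free_total)] if free_total else [])

memory = [("used", 100), ("free", 50), ("used", 150), ("free", 30), ("used", 20)]
print(compact(memory))
# -> [('used', 100), ('used', 150), ('used', 20), ('free', 80)]
```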
Advantages of Compaction
• Reduces external fragmentation.
• Makes memory usage more efficient.
• Memory becomes contiguous.
• Since memory becomes contiguous more processes can be loaded to
memory, thereby increasing the scalability of OS.
• Fragmentation of the file system can be temporarily removed by compaction.
• Improves memory utilization as there is less gap between memory blocks.
Disadvantages of Compaction
• System efficiency is reduced and latency is increased.
• A huge amount of time is wasted in performing compaction.
• CPU sits idle for a long time.
• Not always easy to perform compaction.
• It may cause deadlocks since it disturbs the memory allocation process.

RELOCATION


Operating systems can manage where each process is stored in memory using a technique called relocation. The operating system itself is often stored in the highest memory addresses.
When a program is compiled and executed, it starts at physical address zero (0); the maximum address is equal to the total memory size minus the operating-system size. A process is loaded into and allocated a contiguous segment of memory. The first (smallest) physical address of the process is the base address, and the largest physical address the process can access is the limit address. There are two methods of relocation:
Static Relocation: The operating system adjusts the memory addresses of a process to reflect its starting position in memory. Once a process is assigned a starting position, it executes within the space it has been allocated; once static relocation has been completed, the operating system can no longer move the process until it terminates. Static relocation was used by the IBM 360 as a stop-gap solution. It is a slow and complicated process.
Dynamic Relocation: Here, the hardware adds a relocation register (base value) to each logical address generated by the CPU; the relocation register thereby translates the address to a physical memory address at run time.
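A minimal sketch of what the relocation hardware does (the base and limit values are hypothetical): each logical address is checked against the limit and then offset by the relocation register.

```python
# Dynamic relocation: physical = base + logical, after a bounds check.

RELOCATION_REGISTER = 14000  # base address of the process (hypothetical)
LIMIT_REGISTER = 3000        # size of the process's address space (hypothetical)

def translate(logical):
    if logical >= LIMIT_REGISTER:   # out-of-bounds access would trap
        raise MemoryError("addressing error: beyond limit register")
    return RELOCATION_REGISTER + logical

print(translate(100))  # -> 14100
```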

PAGING
Paging is a memory management technique in which processes are retrieved from secondary storage in pieces called pages and stored in the main memory's frames.
When a program needs to access data, it asks the operating system for the process, and the operating system brings the process into main memory from secondary memory. Each process is divided into small fixed-size chunks called pages; similarly, main memory is divided into equal fixed-size pieces called frames. The pages of a process may be stored at different locations in main memory. The thing to note here is that the page size and the frame size are the same.

As in the figure above, every page in secondary memory is 2 KB, and in the same way, every frame in main memory is also 2 KB.
The problem is that physical memory is finite. When all of the frames in physical memory are occupied, the operating system has to move processes that are not in use out to secondary storage to make room for new ones; this is called swapping.


We have two types of addresses.


1. Logical address: An address generated by the CPU is commonly referred to as
a logical address. A set of logical addresses generated by a program is a logical
address space.
2. Physical address: An address seen by the memory unit (that is, the one loaded into the memory address register) is commonly referred to as a physical address. The set of physical addresses corresponding to these logical addresses is the physical address space.
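The logical-to-physical translation can be sketched with a toy page table (the frame numbers are hypothetical; the 2 KB page size follows the figure above):

```python
# Paging: logical address -> (page number, offset) -> (frame number, offset).

PAGE_SIZE = 2048  # 2 KB pages, matching the figure above
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (hypothetical)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split the logical address
    frame = page_table[page]                   # page-table lookup
    return frame * PAGE_SIZE + offset          # physical address

print(translate(3000))  # page 1, offset 952 -> frame 2 -> 2*2048 + 952 = 5048
```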

Advantages of Paging
• Conserves memory by keeping only the active pages in memory. This is especially helpful in large-scale systems where memory is scarce.
• Enables the operating system to manage more processes by allowing each
process to have its dedicated memory space. This maximizes efficiency and
performance by allowing the operating system to schedule and run each process
without conflicts.
• Allows for greater flexibility and scalability regarding the size and complexity of
the systems that can be created.
• Parts of the program are allowed to be stored at different locations in the main
memory.
• It solves the problem of external fragmentation.
• Swapping becomes very easy due to equal-sized pages and frames.

Disadvantages of Paging
• It can be very inefficient. When a process needs more memory, the operating
system must find a block of unused memory and copy it to the process.
• This process can take a long time and, in some cases, can even crash the
system.
• Paging can cause internal fragmentation, which makes the system run more
slowly.
• The page table is there, which takes some memory space.
• Have to maintain a page table for each process.
• Memory access time increases as the page table needs to be accessed.

When to Use Paging


• You have too many processes running and not enough physical memory to store
them all.
• When you have a large process that can be split into multiple pages.
• When you want to load a process into physical memory without loading the
entire process.

SEGMENTATION


Segmentation divides a process into smaller subparts known as modules, or segments. The divided segments need not be placed in contiguous memory. Because each segment is allocated exactly the space its module requires, internal fragmentation does not take place. The length of each segment of the program is decided by the purpose that segment serves in the user program.
We can say that the logical address space is a collection of segments.

Types of Segmentation: Segmentation can be divided into two types:

1. Virtual Memory Segmentation: Virtual memory segmentation divides a process into n segments, but the segments are not all loaded into memory at once; segments may or may not be brought in during the run time of the program.
2. Simple Segmentation: Simple segmentation also divides a process into n segments, but all the segments are divided and loaded together at once when the program runs. Simple segmentation may scatter the segments into memory such that one segment of the process can be at a different location than another (in a non-contiguous manner).

Why Segmentation is required? Segmentation came into existence because of problems with the paging technique. Paging divides a function or piece of code into pages without considering that related parts of the code also get divided. Hence, for a process in execution, the CPU must load more than one page into frames so that the complete related code is present for execution, and paging takes more pages for a process to be loaded into main memory. Segmentation was therefore introduced: the code is divided into modules, so that related code is kept together in one single block.
Other memory management techniques also have an important drawback: the actual view of physical memory is separated from the user's view of it. Segmentation helps overcome this problem by dividing the user's program into segments according to their specific purpose.

Advantages of Segmentation in OS
• There is no internal fragmentation in segmentation.
• Segment Table is used to store the records of the segments. The segment table
itself consumes less memory as compared to a page table in paging.
• Segmentation provides better CPU utilization as an entire module is loaded at
once.
• Segmentation is close to the user's view of memory. Segmentation allows users to partition their programs into modules, which are independent pieces of code of the current process.
• The Segment size is specified by the user but in Paging, the hardware decides
the page size.
• Segmentation can be used to separate the security procedures and data.

Disadvantages of Segmentation in OS
• During the swapping of processes, the free memory space is broken into small
pieces, which is a major problem in the segmentation technique.
• Time is required to fetch instructions or segments.
• The swapping of segments of unequal sizes is not easy.
• There is an overhead of maintaining a segment table for each process as well.
• When a process is completed, its segments are removed from the main memory. Since the segments are of uneven length, this leaves unevenly sized holes in the main memory, and these holes may remain unused because they are too small.

Characteristics of Segmentation in OS: Some of the characteristics of segmentation are discussed below:
• Segmentation partitions the program into variable-sized blocks or segments.
• Partition size depends upon the type and length of modules.
• Segmentation is done considering that the relative data should come in a
single segment.
• Segments of the memory may or may not be stored continuously depending
upon the segmentation technique chosen.
• The Operating System maintains a segment table for each process.

Example of Segmentation: Let's take an example of segmentation to understand how it works.
Let us assume we have five segments namely: Segment-0, Segment-1,
Segment-2, Segment-3, and Segment-4. Initially, before the execution of the
process, all the segments of the process are stored in the physical memory space.
We have a segment table as well. The segment table contains the beginning entry
address of each segment (denoted by base). The segment table also contains the
length of each of the segments (denoted by limit).


As shown in the image below, the base address of Segment-0 is 1400 and its
length is 1000, the base address of Segment-1 is 6300 and its length is 400, the
base address of Segment-2 is 4300 and its length is 400, and so on.
The pictorial representation of the above segmentation with its segment table
is shown below.
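The base/limit lookup can be sketched directly from the segment table above (segments 0 through 2, with the quoted base and limit values):

```python
# Segmentation: (segment number, offset) -> base + offset, after a limit check.
# Base/limit values are taken from the example segment table above.

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:          # offset outside the segment would trap
        raise MemoryError("segmentation fault: offset beyond limit")
    return base + offset

print(translate(2, 53))  # segment 2, base 4300 -> physical address 4353
```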

SEGMENTATION WITH PAGING


Paged segmentation and segmented paging are two memory management techniques that combine the benefits of both segmentation and paging; the combined scheme is known as segmentation with paging.
In this scheme, main memory is split into variable-size segments, which are subsequently partitioned into fixed-size pages. Each segment has its own page table, so each process has many page tables. Each page table holds information for the pages of one segment, whereas the segment table holds information for each segment: the segment table links to the page tables, and the page tables link to the individual pages within a segment.

Segmented Paging: Segmented paging is a way to manage computer memory that breaks it into small chunks called segments, and within each segment, fixed-size pages. It is like dividing a big book into chapters, and each chapter into pages, to make it easier to read and manage.
Pages are created from segments. The implementation requires an STR (segment table register) and a PMT (page map table). Each virtual address in this method consists of a segment number, a page number, and an offset within that page. The segment number indexes into the segment table, which returns the base address of the page table for that segment.


The page number is an index into the page table, each entry of which identifies a page frame. The physical address is obtained by combining the PFN (page frame number) with the offset. As a result, addressing may be defined by the function:
va = (s, p, d)
where
va is the virtual address,
s is the segment number (its width determines the number of segments, i.e., the size of the ST),
p is the page number within the segment (its width determines the number of pages per segment, i.e., the size of the PT),
d is the displacement within the page (its width determines the page size).
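Putting the two tables together, here is a toy translation of va = (s, p, d); all table contents and the page size are assumptions for illustration:

```python
# Segmented paging: segment table -> per-segment page table -> frame + offset.

PAGE_SIZE = 1024
# segment number -> that segment's page table (page number -> frame number)
segment_table = {
    0: {0: 3, 1: 8},
    1: {0: 1},
}

def translate(s, p, d):
    page_table = segment_table[s]  # ST lookup: find the segment's page table
    frame = page_table[p]          # PT lookup: find the page's frame
    return frame * PAGE_SIZE + d   # physical address = frame base + offset

print(translate(0, 1, 100))  # frame 8 -> 8*1024 + 100 = 8292
```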

Advantages of Segmented Paging: The benefits of segmented paging are as follows:
• Each segment is represented by a single entry in the segment table. It lowers
memory use
• The segment size determines the size of the Page Table
• It reduces the issue of external fragmentation

Disadvantages of Segmented Paging: The downsides of segmented paging are as follows:
• Internal fragmentation plagues segmented paging
• When compared to paging, the complexity level is substantially higher
• Managing both segmentation and paging tables increases overhead
• Require additional effort to design and implement

Paged Segmentation: Paged segmentation is used to solve the problems that segmented paging has. Every process has a different number of segments, and because segment tables vary in size, a large segment table can itself cause external fragmentation. To solve this, the process's segment table is itself paged. The page table can also contain invalid pages even under segmented paging. Paged segmentation, which applies multi-level paging together with segmentation, therefore gives a better solution for large tables.


Advantages of Paged Segmentation


• It offers protection in specific segments
• It uses less memory than paging
• There is no external fragmentation
• It optimizes resource allocation
• Ease of adding or removing segments and pages
• It also facilitates multi-programming
• It supports large programs
Disadvantages of Paged Segmentation
• It is costly
• In the case of swapping, segments with different sizes are not good
• Programmer intervention is required
• Introduces additional overhead in terms of memory and processing
• Adds complexity to memory management
• Fragmentation can still occur over time
• The learning curve is steep

Difference Between Paging and Segmentation

• In paging, the program is divided into fixed-size pages; in segmentation, it is divided into variable-size segments.
• For paging, the operating system is accountable; for segmentation, the compiler is accountable.
• Page size is determined by the hardware; segment size is specified by the user.
• Paging is faster in comparison to segmentation; segmentation is slower.
• Paging can result in internal fragmentation; segmentation can result in external fragmentation.
• In paging, the logical address is split into a page number and a page offset; in segmentation, it is split into a segment number and a segment offset.
• Paging uses a page table that holds the base address of every page; segmentation uses a segment table that holds the base address and limit of every segment.
• In paging, the operating system must maintain a free-frame list; in segmentation, it maintains a list of holes in main memory.
• Paging is invisible to the user; segmentation is visible to the user.
• In paging, the processor uses the page number and offset to calculate the absolute address; in segmentation, it uses the segment number and offset to calculate the full address.
• It is hard to share procedures between processes with paging; segmentation facilitates the sharing of procedures between processes.
• In paging, a programmer cannot handle data structures efficiently; segmentation can handle data structures efficiently.
• Protection is hard to apply in paging; it is easy to apply in segmentation.
• The page size must always equal the frame size; there is no constraint on the size of segments.
• A page is a physical unit of information; a segment is a logical unit of information.
• Paging results in a less efficient system; segmentation results in a more efficient system.

PROTECTION
When several users share computer resources such as CPU, memory, and
other resources, security is more crucial. It is the job of the operating system to
provide a mechanism that protects each process from other processes. All assets
that require protection in a multiuser environment are categorized as objects, and individuals who seek to access these objects are referred to as subjects. Distinct access privileges are granted to different subjects by the operating system.
Protection is a method that limits the access of programs, processes, or users to the resources defined by a computer system. In multiprogramming operating systems, protection allows several users to safely share a common logical namespace, such as a directory or files. It necessitates the safeguarding of computer resources such as software, memory, and processors. Protection is provided by maintaining confidentiality, integrity, and availability in the OS, and the system must be protected against unauthorized access, viruses, worms, and other malware.

Why is it important? There could be security issues such as illegal reading, writing, or modification, or the system could fail to function properly for authorized users. Protection helps guard data, processes, and programs against unauthorized access by users or programs. It is critical to guarantee that there are no breaches of access permissions, no malware, and no illegal access to existing data. The goal is to ensure that only parties that conform to the system's policies can access programs, resources, and data.

Need for Protection in OS
• Isolation: Protection OS ensures isolation between different processes and
users, preventing unauthorized access to resources.
• Security: It protects system resources, such as memory and files, from
unauthorized access, modification, or corruption.
• Stability: Protection OS enhances system stability by preventing one process
from interfering with or crashing other processes.
• Fairness: It ensures fair resource allocation among competing processes,
preventing one process from monopolizing system resources.

Goals of Protection: Protection mechanisms in an operating system serve several goals:
• Confidentiality: Ensuring that sensitive information is accessible only to
authorized users or processes.
• Integrity: Guaranteeing that data remains unaltered and trustworthy
throughout its lifecycle.
• Availability: Ensuring that resources and services are available and accessible
to authorized users when needed.
• Isolation: Preventing interference and unauthorized access between different
processes, users, or components.
• Auditability: Providing mechanisms for tracking and monitoring system
activities to detect and investigate security incidents or violations.

FILE MANAGEMENT SYSTEM


A file is a collection of related information recorded in some format (such as text, PDF, or DOCX) and stored on a storage medium such as a flash drive, hard disk drive (HDD), magnetic tape, or optical disk. Files can be read-only or read-write, and they serve as a medium for providing inputs and collecting outputs.
So, file management is one of the basic but important features provided by the operating system. File management in the operating system is the software that handles or manages the files (binary, text, PDF, DOCX, audio, video, etc.) present on the computer. The file system in the operating system is capable of managing individual files as well as groups of files present in the computer system.
The file management in the operating system manages all the files present in
the computer system with various extensions (such as .exe, .pdf, .txt, .docx, etc.)
We can also use the file system in the operating system to get details of any file(s)
present on our system. The details can be:
• the location of the file (the logical location where the file is stored in the computer system)
• the owner of the file (who can read or write the particular file)
• when the file was created (creation time and modification time)
• the type of file (its format, for example text, PDF, or DOCX)
• the state of completion of the file, etc.
For the operating system to understand and manage a file, the file must have a predefined structure or format. There are three types of file structures in operating systems:
1. Text file: A text file is a non-executable file containing a sequence of numbers, symbols, and letters organized in the form of lines.
2. Source file: A source file contains a series of functions and procedures; in simple terms, it is a file that contains the instructions of a program.
3. Object file: An object file contains object code in the form of assembly-language or machine-language code. In simple terms, object files contain program instructions as a series of bytes organized in blocks.

Functions of File Management in Operating System: Now let us talk about some of the most important functions of file management in operating systems.
• Allows users to create, modify, and delete files on the computer system.
• Manages the locations of files present on the secondary memory or primary
memory.
• Manages and handles the permissions of a particular file for various users and
groups.
• Organizes the files in a tree-like structure for better visualization.
• Provides an interface for I/O operations.
• Secures files from unauthorized access and hackers.

Objectives of File Management in Operating Systems: In the last section, we gained a good basic understanding of files, operating systems, and file management in operating systems. Let us now learn some of the objectives of file management in operating systems.


• The file management in the operating system allows users to create a new file,
and modify and delete the old files present at different locations of the computer
system.
• The operating system file management software manages the locations of the file
store so that files can be extracted easily.
• As we know, processes share files, so one of the most important features of file management in operating systems is to make files sharable between processes. It helps the various processes securely access the required information from a file.
• The operating system file management software also manages the files so that
there is very little chance of data loss or data destruction.
• The file management in the operating system provides input-output operation
support to the files so that the data can be written, read, or extracted from the
file(s).
• It also provides a standard input-output interface for the user and system
processes. The simple interface allows the easy and fast modification of data.
• The file management in the operating system also manages the various user
permissions present on a file. There are three user permissions provided by the
operating system, they are: read, write, and execute.
• The file management in the operating system supports various types of storage
devices such as flash drives, hard disk drives (HDD), magnetic tapes, optical
disks, tapes, etc., and it also allows the user(s) to store and extract them
conveniently.
• It also organizes the files hierarchically in the form of files and folders (directories) so that these files are easier to manage from the user's perspective as well. Refer to the diagram below for better visualization.

Advantages of File Management in OS: Some of the main advantages that


the file system in the operating system provides are:
• Protection of the files from unauthorized access.
• Recovers the free space created when files are removed or deleted from the hard
disk.
• Assigns the disk space to various files with the help of disk management
software of the operating system.
• As we know, a file may be stored at various locations in the form of segments so
the file management in the operating system also keeps track of all the blocks or
segments of a particular file.
• Helps to manage the various user permissions so that only authorized persons
can perform the file modifications.
• It also keeps our files secure from hackers with the help of security management
in the operating system.

Disadvantages of File Management in OS: Some of the main disadvantages of the file system in the operating system are:
• If the size of the files becomes large then the management takes a good amount
of time due to hierarchical order.


• To get more advanced management features, we need an advanced version of the


file management system. One of the advanced features is the document
management feature (DMS) which can organize important documents.
• The file system in the operating system can only manage the local files present in
the computer system.
• Security can sometimes be an issue, as a virus in one file can spread to various other files through the tree-like (hierarchical) structure.
• Due to the hierarchical structure, file access can sometimes be slow.

FILE ACCESSING METHODS


A file is a collection of bits, bytes, or lines stored on a secondary storage device such as a hard drive (a magnetic disk).
File access methods in an OS are simply the techniques used to read a file's data from storage. There are various ways in which we can access files:
• Sequential Access
• Direct/Relative Access, and
• Indexed Sequential Access.
These methods by which the records in a file can be accessed are referred to as file access mechanisms. Each mechanism has its own set of benefits and drawbacks, which are discussed further in this section.

Types of File Access Methods


Sequential Access: In the sequential access method, the operating system reads the file word by word. A pointer is maintained that initially points to the file's base address. When the user wishes to read the first word of the file, the pointer returns it and advances to the next word. This procedure continues until the end of the file. It is the most basic way of accessing a file: the data is processed in the order in which it appears in the file, which makes sequential access simple and easy. For example, editors and compilers frequently use this method to check the validity of code.
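The pointer-based procedure can be sketched with an in-memory list standing in for the file's records (a simplified model; the class and record names are ours):

```python
# Sequential access: a read pointer starts at the file's base and advances
# one record at a time; each read returns the current record.

class SequentialFile:
    def __init__(self, records):
        self.records = records
        self.pointer = 0              # starts at the file's base address

    def read_next(self):
        if self.pointer >= len(self.records):
            raise EOFError("end of file")
        record = self.records[self.pointer]
        self.pointer += 1             # advance to the next record
        return record

f = SequentialFile(["alpha", "beta", "gamma"])
print(f.read_next())  # -> alpha
print(f.read_next())  # -> beta
```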

Advantages of Sequential Access:


• The sequential access mechanism is very easy to implement.
• It uses lexicographic order to enable quick access to the next entry.

Disadvantages of Sequential Access:


• Sequential access will become slow if the next file record to be retrieved is not
present next to the currently pointed record.
• Adding a new record may require relocating a significant number of the file's records.

Direct (or Relative) Access: A direct (relative) file access mechanism is mostly required by database systems. In the majority of circumstances we need specific, filtered records from the database, and in such circumstances sequential access can be highly inefficient. Assume that each block of storage holds four records and that the record we want is stored in the tenth block: sequential access would have to traverse all of the preceding blocks, while direct access allows us to reach the required record instantly.
The direct access mechanism requires the OS to perform some additional work, but it leads to much faster retrieval of records than sequential access.
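The block arithmetic from the example can be sketched directly (four records per block, as assumed above):

```python
# Direct (relative) access: compute which block holds record n, then read
# that one block instead of scanning all the blocks before it.

RECORDS_PER_BLOCK = 4

def locate(record_number):
    """Return (block index, offset within block) for a 0-based record number."""
    return divmod(record_number, RECORDS_PER_BLOCK)

# Record 37 lives in block 9 at offset 1: one seek and one read suffice.
print(locate(37))  # -> (9, 1)
```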

Advantages of Direct/Relative Access:


• The files can be retrieved right away with a direct access mechanism, reducing
the average access time of a file.
• There is no need to traverse all of the blocks that come before the required block
to access the record.
Disadvantages of Direct/Relative Access:
• The direct access mechanism is typically difficult to implement due to its
complexity.
• Organizations can face security issues as a result of direct access as the users
may access/modify the sensitive information. As a result, additional security
processes must be put in place.

Indexed Sequential Access: This is another approach to accessing a file, constructed on top of the sequential access mechanism. It is practically similar to the pointer-to-pointer concept, in which one pointer variable stores the address of another pointer variable that holds the address of the actual record. The indexes, like a book's index, contain links to the various blocks present in memory. To locate a record in the file, we first search the indexes and then follow the pointers to the required record. Primary index blocks contain links to the secondary (inner) index blocks, which in turn contain links to the data in memory.
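A toy two-level index (all keys and block names are hypothetical) shows the search-the-index, then follow-the-pointer lookup:

```python
# Indexed sequential access: search a small primary index, follow it to a
# secondary (inner) index block, then to the data block holding the record.

primary_index = [("kiwi", "inner1"), ("zebra", "inner2")]  # (highest key, inner block)
inner_blocks = {
    "inner1": {"apple": "block-A", "kiwi": "block-B"},
    "inner2": {"mango": "block-C", "zebra": "block-D"},
}

def lookup(key):
    for highest_key, inner in primary_index:   # 1. scan the primary index
        if key <= highest_key:
            return inner_blocks[inner][key]    # 2. follow pointer to the data
    raise KeyError(key)

print(lookup("mango"))  # -> block-C
```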

Advantages of Indexed Sequential Access:


• If the index table is appropriately arranged, it accesses the records very quickly.
• Records can be added at any position in the file quickly.

Disadvantages of Indexed Sequential Access:


• When compared to other file access methods, it is costly and less efficient.
• It needs additional storage space.

FILE DIRECTORIES
On a computer, a directory is used to store, arrange, and segregate files and folders. It is similar to a telephone directory in that it contains only lists of names, numbers, and addresses rather than the actual documents. A directory uses a hierarchical structure to organize files and subdirectories. On many computers, directories are referred to as drawers or folders, much like a standard filing cabinet in an office. You might, for instance, create one directory for images and another for all of your documents; by saving particular file types in their own folders, you can easily access the kind of file you want.


There are several logical structures of a directory, these are given below.
• Single-level directory
• Two-level directory
• Tree structure or hierarchical directory
• Acyclic graph directory
• General graph directory structure

Single-Level Directory Structure: It is the simplest directory structure. In a single-level directory there is only one directory in the system, meaning there is only one folder, and all the files are stored in that single directory. There is no way to segregate important files from non-important files.
Implementation of a single-level directory is the simplest, but it has various disadvantages. The pictorial representation of a single-level directory is given below: there is only one directory (the root directory), and all the files are stored in it. Here f1, f2, f3, f4, and f5 represent five different files. Practically, it can be thought of as a structure where all the files are stored in the same folder.

Advantages of single-level directory


• The main advantage of a single-level directory is that it is very simple to
implement.
• Since all the files are present in the same directory, in case the number of files is
less, then searching for a particular file is faster and easier.
• Simple operations like file creation, search, deletion, and updating are possible
with a single-level directory structure.
• The single-level directory is easier to understand in practical life.

Disadvantages of single-level directory


• In case we want to organize the files in some groups, it is not possible to do so
since we cannot create subdirectories.
• Two file names cannot be the same. If two files are given the same name,
the previous one is overwritten.
• If the number of files is very large, searching a particular file is very inefficient.
• Segregation of important and unimportant files is not possible.
• The single-level directory is not useful for multi-user systems.
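The name-collision problem listed above can be modelled with a flat mapping from file name to file contents; this is only a toy model, and the file names and contents are made up.

```python
# Toy model of a single-level directory: one flat namespace shared
# by every user of the system.
directory = {}

def create_file(name, contents):
    # A file with the same name is silently overwritten, mirroring
    # the collision problem of a single-level directory.
    directory[name] = contents

create_file("report.doc", "alice's report")
create_file("report.doc", "bob's report")   # overwrites the first file
```

After both calls, only one `report.doc` survives, and its contents are the second user's.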

Two-Level Directory Structure: We saw how the single-level directory
proves to be inefficient if multiple users are accessing the system. If two different
users wanted to create a file with the same name (say report.doc), it was not
allowed in a single-level directory.
In a two-level directory structure, there is a master file directory that holds a
separate directory for each user, and each user can store files in their own
directory. It can be practically thought of as a folder that contains many folders,
one for each user; within the allocated directory, each user stores files just like in
a single-level directory.
The pictorial representation of a two-level directory is shown below. For
every user, there is a separate directory. At the next level, every directory stores the
files just like a single-level directory. Although not very efficient, the two-level
directory is better than a single-level directory structure.

Advantages of a two-level directory


• Searching is very easy.
• There can be two files with the same name in two different user directories.
Since they are not in the same directory, the same name can be used.
• Grouping is easier.
• A user cannot enter another user’s directory without permission.
• Implementation is easy.
Disadvantages of two-level directory
• One user cannot share a file with another user.
• Even though it allows multiple users, a user still cannot keep two files with
the same name inside their own directory.
• It does not allow users to create subdirectories.
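The two-level scheme can be sketched as a mapping from user to that user's private namespace; the user names below are hypothetical.

```python
# Toy model of a two-level directory: a master file directory maps
# each user to that user's own file directory.
master = {"alice": {}, "bob": {}}

def create_file(user, name, contents):
    # Names only need to be unique within one user's directory.
    master[user][name] = contents

# The same file name can now exist under two different users.
create_file("alice", "report.doc", "alice's report")
create_file("bob", "report.doc", "bob's report")
```

Unlike the single-level case, neither `report.doc` overwrites the other, because each lives in a different user directory.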

Tree-Structured Directory Structure: This type of directory is used in our
PCs. The biggest disadvantage of a two-level directory was that one could not create
sub-directories in a directory. The tree-structured directory solved this problem. In
a tree-structured directory, there is a root directory at the peak. The root directory
contains directories for each user. The users can, however, create subdirectories
inside their directory and also store the files.


The pictorial representation of a tree-structured directory is shown below.


The root directory is highly secured, and only the system administrator can access
it. We can see how there can be subdirectories inside the user directories. A user
cannot modify the root directory data. Also, a user cannot access another user's
directory.
Advantages of tree-structured directory
• Highly scalable compared to the previous two types of directories.
• Allows subdirectories inside a directory.
• Searching is easy.
• Allows grouping.
• Segregation of important and unimportant files is easy.
Disadvantages of tree-structured directory
• As one user cannot enter another user’s directory, this restricts sharing of
files.
• Too many subdirectories may make the search complicated.
• Users cannot modify the root directory’s data.
• Files might have to be saved in multiple directories in case all of them do not
fit into one.
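The nesting described above can be sketched as dictionaries within dictionaries, with a path resolved one component at a time; the directory names and path below are illustrative only.

```python
# Toy model of a tree-structured directory: directories are dicts,
# files are strings, and a path is resolved component by component
# starting from the root directory.
root = {
    "alice": {"docs": {"notes.txt": "hello"}},
    "bob": {},
}

def resolve(path):
    """Walk the tree from the root, one path component at a time."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]
    return node
```

For instance, `resolve("/alice/docs/notes.txt")` descends through `alice`, then `docs`, and returns the file contents `"hello"`.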

Acyclic Graph Directory Structure: Suppose there is a file abcd.txt. Out of
the three types of directories we studied above, none of them provide the flexibility
to access the file abcd.txt from multiple directories, i.e., we cannot access a
particular file or subdirectory from two or more directories. The file or the
subdirectory can be accessed only by the directory it is present inside.
The solution to this problem is presented in the acyclic graph directory. In
this type of directory, we can access a file or a subdirectory from multiple
directories. Hence files can be shared between directories. It is designed in such a
way that multiple directories point to a particular directory or file with the help of
links.


Advantages of acyclic-graph directory


• Allows sharing of files or subdirectories from more than one directory.
• Searching is very easy.
• Provides more flexibility to the users.
Disadvantages of acyclic-graph directory
• Harder to implement in comparison to the previous three.
• Since the files are accessed from multiple directories, deleting a file may
cause some errors if the user is not cautious.
• If the files are linked by a hard link, then it is necessary to delete all the
references to that file to permanently delete the file.
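The hard-link caveat above can be sketched with a simple reference count: the file's data survives until the last link to it is removed. This mirrors how Unix-like systems behave, but the code itself is only a toy model with invented names.

```python
# Toy model of hard links in an acyclic-graph directory: several
# directory entries share one underlying file object.
class FileData:
    def __init__(self, contents):
        self.contents = contents
        self.link_count = 0

storage = []  # stand-in "disk": data lives here while links remain

def link(directory, name, file_data):
    directory[name] = file_data
    file_data.link_count += 1

def unlink(directory, name):
    file_data = directory.pop(name)
    file_data.link_count -= 1
    if file_data.link_count == 0:
        storage.remove(file_data)  # last reference gone: reclaim space

shared = FileData("abcd")
storage.append(shared)
dir_a, dir_b = {}, {}
link(dir_a, "abcd.txt", shared)   # abcd.txt reachable from dir_a...
link(dir_b, "abcd.txt", shared)   # ...and from dir_b
unlink(dir_a, "abcd.txt")         # data survives: dir_b still links it
```

Removing the entry in `dir_a` does not delete the data; only when the last link is removed does the storage get reclaimed.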

General-Graph Directory Structure: This is an extension to the acyclic-


graph directory. In the general-graph directory, there can be a cycle inside a
directory.

In the above image, we can see that a cycle is formed in the user 2 directory.
Although it provides greater flexibility, it is complex to implement this structure.
Advantages of General-graph directory
• Compared to the others, the General-Graph directory structure is more
flexible.
• Cycles are allowed in a general-graph directory.


Disadvantages of General-graph directory


• It costs more than alternative solutions.
• Garbage collection is an essential step here.

FILE ALLOCATION METHODS


When a hard drive is formatted, it is divided into numerous small storage
areas called blocks. The operating system stores file data in these blocks using
various file allocation methods, which enable the hard disk to be used efficiently
and files to be accessed quickly.

Types of File Allocation Methods


• Contiguous File Allocation
• Linked File Allocation
• Indexed File Allocation
• File Allocation Table (FAT)
• Inode

Contiguous File Allocation: In the contiguous allocation method, files are
allocated disk blocks in a contiguous manner, meaning that if a file's starting disk
block address is x, then it will be allocated the blocks with addresses x+1, x+2,
x+3, …, provided the blocks are not already occupied.
Suppose a file abcd.doc has a starting disk block address of 2, and it
requires four such blocks; hence in the contiguous allocation method, the file will
be allocated the disk blocks with the addresses 2, 3, 4, and 5.

The operating system also maintains a directory table that includes the file
name along with the starting address and the length of the blocks allocated. The
length represents the number of disk blocks required by the file.
In the above figure, it can be seen that the file "os.pdf" requires four disk
blocks and its starting disk block address is 1. Therefore, the blocks allocated to
the file are 1, 2, 3, and 4. Similarly, for “dbms.doc” the blocks allocated are 7, 8,
and 9 since its starting address is 7 and length is 3.
Advantages of contiguous file allocation


• Since the blocks are allocated in sequential order, therefore it can be accessed
sequentially since the starting address and the length are already available in
the directory table.
• The block allocation is similar to the array. Given the starting address, we can
"jump" to any block address by simply adding the block size to the starting
address, just as we do while accessing any block in an array. Hence the
contiguous allocation also allows random access to the blocks.
• The seek time is less because of the contiguous allocation of blocks. This makes
it very fast.
Disadvantages of contiguous file allocation
• It suffers from internal fragmentation. Suppose the size of a block is 2 KB, but
the file that has to be stored is just 1 KB. In that case, an extra 1 KB remains
unutilized and the memory is wasted.
• It suffers from external fragmentation. If there are sufficient blocks available to
store a file, but if they are not contiguous, the file cannot be stored.
• The size of the file can be increased only if free contiguous disk blocks are
available.
• Although it allows fast access, memory management is poor.
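The array-like random access described above can be shown directly: given the start block from the directory table, block i of a file is simply start + i. The directory-table entries echo the "os.pdf" and "dbms.doc" example from the text.

```python
# Directory table for contiguous allocation:
# file name -> (starting block, length in blocks).
directory_table = {
    "os.pdf": (1, 4),    # occupies blocks 1, 2, 3, 4
    "dbms.doc": (7, 3),  # occupies blocks 7, 8, 9
}

def block_of(name, i):
    """Return the disk block holding the i-th block of the file.

    Random access costs one addition, just like array indexing.
    """
    start, length = directory_table[name]
    if not 0 <= i < length:
        raise IndexError("block index out of range")
    return start + i
```

So the last block of "os.pdf" is 1 + 3 = 4, with no pointer chasing at all.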

Linked File Allocation: The linked allocation works just like the linked list.
The problem with contiguous allocation was that memory remained unutilized due
to external fragmentation. The solution to the problem was to allocate the disk
block in the form of a linked list where every block was a node.
The blocks are allocated in such a way that every block contains a pointer to
the next block that is allocated to the file.

The above image shows how the linked allocation works. The file "os.pdf" has
to be allocated some blocks. The first block allocated is 4. Block 4 will have a
pointer to block 8, block 8 will have a pointer to block 10, block 10 will have a
pointer to block 11, block 11 will have a pointer to block 2, and finally, block 2 will
point to 3. In this manner, a total of six blocks are allocated to the file non-
contiguously. The ending block (block 3) will not point to any other block.
Advantages of linked file allocation
• There is no external fragmentation because blocks can be allocated in random
order with the help of pointers. Contiguous allocation is not required.


• File size can be increased, even if the contiguous blocks are not available,
provided there are enough blocks to store the file.
• Memory is used judiciously, since scattered free blocks anywhere on the disk
can be utilized.

Disadvantages of linked file allocation


• Random access is not allowed since the memory allocation is not contiguous.
Therefore, with the help of the starting block address, we cannot directly jump to
some other block, just as we cannot jump to any node in the linked list with just
the head pointer.
• It is relatively slow, because following the chain of pointers incurs more seek
time.
• Every block needs to contain extra information about the pointer to the next
block.
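The chain for "os.pdf" described above (4 → 8 → 10 → 11 → 2 → 3) can be sketched with a next-pointer table; reaching block i requires walking the chain from the head, which is exactly why random access is lost.

```python
# Linked allocation: each block stores a pointer to the next block.
# next_block[b] is the block following b; None marks end of file.
next_block = {4: 8, 8: 10, 10: 11, 11: 2, 2: 3, 3: None}
start = 4  # head block of "os.pdf", taken from the directory table

def nth_block(i):
    """Walk the chain from the start block; costs O(i) pointer hops."""
    b = start
    for _ in range(i):
        b = next_block[b]
        if b is None:
            raise IndexError("past end of file")
    return b
```

Reaching the last block of the file takes five hops from the head, whereas contiguous allocation would compute it with a single addition.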

Indexed File Allocation: Although linked allocation efficiently used the
memory, it was slower than contiguous allocation. There was a need for an
allocation method such that the disk blocks are used properly and also the access
time is less. This is where indexed allocation comes to the rescue. It can be thought
of as a mixture of linked and contiguous allocation that is more efficient in terms of
space and time.
There are index blocks that contain pointers to the blocks occupied by the
file. Thus every file has its index block, which is a disk block, but instead of storing
the file itself, it contains the pointers to other blocks that store the file. This helps
in randomly accessing the files and also avoiding external fragmentation.

From the above image, we can see that block number 8 does not store the
file but contains the pointers to various other blocks, which store the file. The
directory table contains only the file name and the index block for the respective
files.
Below is the pictorial representation of index block 8, which contains the
pointers that determine the address of the blocks that store the "os.pdf" file.


Since the size of every block is limited, a problem arises when a file needs more
pointers than a single index block can hold; schemes such as linked index blocks
or a multilevel index are then required.
Advantages of Indexed allocation
• No external fragmentation.
• Allows random access to disk blocks.
• Allows direct access, reducing complexity.
Disadvantages of Indexed allocation
• It is very complex.
• Extra memory for index blocks.
• Large pointer overhead.
• For very large files, a single index block may not be able to hold all the
pointers.
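Indexed allocation restores constant-time random access: the index block is just a list of block numbers, so the i-th block of a file is one table lookup away. Index block 8 for "os.pdf" matches the example in the text, but the data block numbers it contains are invented, since the original figure is not reproduced here.

```python
# Indexed allocation: the directory maps each file to its index
# block, and the index block lists every data block of the file.
index_blocks = {8: [3, 5, 9, 12]}    # data block numbers are illustrative
directory_table = {"os.pdf": 8}      # file name -> index block number

def nth_block(name, i):
    """Random access: one lookup in the file's index block."""
    return index_blocks[directory_table[name]][i]
```

Compare this with linked allocation, where the same request would require i pointer hops through the data blocks themselves.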

File Allocation Table (FAT): The File Allocation Table (FAT) is a file system
format commonly used in older computer systems and removable storage devices.
It organizes data storage by maintaining a table that tracks the allocation status of
individual file clusters on a storage medium. While less common today, FAT was
instrumental in early computing, providing a straightforward way to manage files
and directories.
Advantages of File Allocation Table (FAT)
• FAT is widely supported across different operating systems and devices, making
it easy to share data between various platforms without compatibility issues.
• FAT's straightforward structure makes it relatively easy to implement and
understand, which was particularly advantageous in the early days of
computing.
• The minimalistic design of FAT requires less storage space and processing power
compared to more modern file systems, making it suitable for systems with
limited resources.
• Due to its simplicity, FAT file systems are often recoverable using basic data
recovery tools, allowing for potential retrieval of lost or deleted data.


Disadvantages of File Allocation Table (FAT)


• FAT lacks advanced metadata capabilities, such as support for extended file
attributes, which limits the types of information that can be associated with files.
• As files are created, modified, and deleted, the FAT file system can suffer from
fragmentation, leading to slower file access times and inefficient use of storage
space.
• FAT does not offer robust security features or fine-grained access controls,
leaving files more vulnerable to unauthorized access and modifications.
• FAT file systems have limitations on file sizes and partition sizes, which can be
restrictive when dealing with large files or storage devices.
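A FAT is essentially the linked-allocation next-pointer table lifted out of the data blocks into one array that can be cached in memory. The sketch below uses -1 as the end-of-chain sentinel and invented cluster numbers; real FAT variants use reserved marker values defined by the on-disk format.

```python
# Sketch of a File Allocation Table: fat[c] gives the cluster that
# follows cluster c in a file, with EOC marking end of chain.
EOC = -1
fat = [0] * 16
fat[4], fat[8], fat[10] = 8, 10, EOC  # a file on clusters 4 -> 8 -> 10

def cluster_chain(start):
    """Collect a file's cluster chain from its starting cluster."""
    chain, c = [], start
    while c != EOC:
        chain.append(c)
        c = fat[c]
    return chain
```

Because the whole table lives in one place, the chain can be followed without reading each data cluster first, unlike plain linked allocation.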

Inode: An inode, short for "index node," is a data structure in Unix-like file
systems that stores metadata about a file, such as its permissions, ownership, size,
and location of data blocks on the storage device. Inodes play a crucial role in
efficient file management and data retrieval, as they enable the operating system to
quickly locate and access files. Each file or directory on the file system corresponds
to an inode, allowing for organized and optimized storage allocation.
Advantages of Inode
• Inodes enable rapid access to file metadata and data blocks, making file
operations like opening, reading, and writing faster and more efficient.
• Inodes allow sparse files—files with unallocated gaps—to be managed effectively,
as they only allocate space for actual data blocks, optimizing storage usage.
• Inodes facilitate the creation of hard links, which are multiple directory entries
pointing to the same inode.
• Inodes enhance file system stability by maintaining data consistency and
reducing the risk of data corruption.
Disadvantages of Inode
• Inodes consume a fixed amount of storage space regardless of file size.
• File systems have a finite number of inodes available.
• As directories grow in size, the number of inodes used to represent directory
entries can increase.
• Inode allocation and management can contribute to storage fragmentation.
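The metadata an inode holds can be sketched as a small record. The fields below follow the description in the text (permissions, ownership, size, data-block locations); the exact layout varies between file systems, and the values are illustrative.

```python
from dataclasses import dataclass, field

# Sketch of an inode: metadata only. Note that the file's NAME is
# not here -- it lives in the directory entry that points to this
# inode, which is what makes hard links possible.
@dataclass
class Inode:
    mode: int                                    # permission bits, e.g. 0o644
    owner: str                                   # owning user
    size: int                                    # file size in bytes
    blocks: list = field(default_factory=list)   # data block numbers

ino = Inode(mode=0o644, owner="alice", size=3000, blocks=[14, 15, 16])
```

Two directory entries in different directories can both reference `ino`; deleting one entry leaves the inode, and the data it points to, intact.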

REVIEW QUESTIONS
1. Write short notes on (1) Swapping and (2) Memory Allocation.
2. What is dynamic loading and dynamic linking? Explain.
3. Describe the following memory allocation methods: (1) Single partition
allocation and (2) Multiple partition allocation.
4. What is paging? Explain.
5. Explain segmentation with paging.
6. Write short notes on (i) Paging and (ii) Compaction.


7. What is swapping? Explain the swap-in and swap-out process with a well-
labelled diagram.
8. Explain memory management requirements.
9. Explain: (i) Logical vs. physical address space and (ii) Internal vs. external
fragmentation.
10. Explain the single partition allocation mechanism with an example.
11. Explain the concept of paging.
12. What is Paging? Explain. Write the advantages and disadvantages of paging.
13. What is Segmentation? Explain. Write advantages of segmentation.
14. What is Long Term Scheduling?
15. Explain Relocation.
16. Explain the method of multiple partition memory management.
17. Write a short note on 'relocation and protection'.
18. What is disk space management? Explain record blocking.
19. Differentiate between logical and physical address space.
20. Differentiate between paging and segmentation.
