Memory Management - PQ

Memory management notes of OS

Uploaded by Saurabh Yadav

The term memory can be defined as a collection of data in a specific format.

It is used to store
instructions and process data. The memory comprises a large array or group of words or bytes, each
with its own location. The primary purpose of a computer system is to execute programs. These
programs, along with the information they access, should be in the main memory during execution.
The CPU fetches instructions from memory according to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory management is
important. Many memory management methods exist, reflecting various approaches, and the
effectiveness of each algorithm depends on the situation.
Here, we will cover the following memory management topics:
 What is Main Memory?
 What is Memory Management?
 Why Memory Management is Required?
 Logical Address Space and Physical Address Space
 Static and Dynamic Loading
 Static and Dynamic Linking
 Swapping
 Contiguous Memory Allocation
o Memory Allocation
o First Fit
o Best Fit
o Worst Fit
o Fragmentation
o Internal Fragmentation
o External Fragmentation
o Paging

Before we discuss memory management, let us look at what main memory is.
What is Main Memory?
The main memory is central to the operation of a Modern Computer. Main Memory is a large array of
words or bytes, ranging in size from hundreds of thousands to billions. Main memory is a repository of
rapidly available information shared by the CPU and I/O devices. Main memory is the place where
programs and information are kept when the processor is effectively utilizing them. Main memory is
associated with the processor, so moving instructions and information into and out of the processor is
extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is
volatile. RAM loses its data when a power interruption occurs.
Main Memory

What is Memory Management?


In a multiprogramming computer, the Operating System resides in a part of memory, and the rest is
used by multiple processes. The task of subdividing the memory among different processes is called
Memory Management. Memory management is a method in the operating system to manage operations
between main memory and disk during process execution. The main aim of memory management is to
achieve efficient utilization of memory.
Why Memory Management is Required?
 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation.
 To ensure proper utilization of main memory.
 To maintain data integrity during process execution.
Next, we discuss the concepts of logical address space and physical address space.
Logical and Physical Address Space
 Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It is
also known as a virtual address. The logical address space can be defined as the size of the process.
A logical address can be changed.
 Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the
memory address register of the memory) is commonly known as a “Physical Address”.
A physical address is also known as a real address. The set of all physical addresses
corresponding to the logical addresses is known as the physical address space. A physical
address is computed by the MMU. The run-time mapping from virtual to physical addresses is done
by a hardware device called the Memory Management Unit (MMU). The physical address always remains
constant.
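The run-time mapping can be sketched in a few lines. Below is a minimal illustration of dynamic relocation using a relocation (base) register and a limit register; the function name and the numbers are made up for the example, not a real OS interface.

```python
# Minimal sketch of dynamic relocation by an MMU using a relocation (base)
# register and a limit register. All names and numbers are illustrative.
def translate(logical_address: int, base: int, limit: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    if not (0 <= logical_address < limit):   # outside the process's space
        raise MemoryError("trap: addressing error beyond limit register")
    return base + logical_address            # relocation performed in hardware

# A process loaded at physical address 14000 with a 3000-word logical space:
physical = translate(346, base=14000, limit=3000)   # -> 14346
```

Note that the logical address 346 never changes from the process's point of view; only the physical address it maps to depends on where the process was loaded.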
Static and Dynamic Loading
Loading a process into main memory is done by a loader. There are two different types of loading:
 Static Loading: Static loading means loading the entire program into a fixed address before
execution begins. It requires more memory space.
 Dynamic Loading: Without dynamic loading, the entire program and all data of a process must be
in physical memory for the process to execute, so the size of a process is limited to the size of
physical memory. To gain proper memory utilization, dynamic loading is used. In dynamic loading,
a routine is not loaded until it is called. All routines reside on disk in a relocatable load
format. One advantage of dynamic loading is that a routine that is never used is never loaded.
This is useful when large amounts of code are needed only to handle infrequently occurring cases.
Static and Dynamic Linking
To perform the linking task, a linker is used. A linker is a program that takes one or more object files
generated by a compiler and combines them into a single executable file.
 Static Linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only
static linking, in which system language libraries are treated like any other object module.
 Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading.
In dynamic linking, “Stub” is included for each appropriate library routine reference. A stub is a
small piece of code. When the stub is executed, it checks whether the needed routine is already in
memory or not. If not available then the program loads the routine into memory.
Swapping
When a process is executed, it must reside in memory. Swapping is the process of temporarily moving a
process from main memory to secondary memory, which is slower than main memory, and later bringing it
back. Swapping allows more processes to be run than can fit into memory at one time. The major part of
swap cost is transfer time, and the total transfer time is directly proportional to the amount of
memory swapped. Swapping is also known as roll out, roll in: if a higher-priority process arrives and
wants service, the memory manager can swap out a lower-priority process and then load and execute the
higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped
back into memory and continues execution.
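Since transfer time dominates and is proportional to the amount of memory swapped, the cost is easy to estimate. The sketch below uses made-up latency and transfer-rate figures purely for illustration.

```python
# Back-of-the-envelope swap cost: total transfer time is proportional to the
# amount of memory swapped. The rate and latency figures are made up.
def swap_time_ms(process_kb: float, rate_kb_per_ms: float,
                 latency_ms: float = 0.0) -> float:
    """One-way swap time: disk latency plus size divided by transfer rate."""
    return latency_ms + process_kb / rate_kb_per_ms

# Swapping a 1024 KB process out and back in at 512 KB/ms with 8 ms latency:
round_trip = 2 * swap_time_ms(1024, 512, latency_ms=8)   # -> 20.0 ms
```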

swapping in memory management

Memory Management with Monoprogramming (Without Swapping)


This is the simplest memory management approach: the memory is divided into two sections:
 One part for the operating system
 The second part for the user program

[Figure: a fence register separates the operating system from the user program]

 In this approach, the operating system keeps track of the first and last locations available for the
allocation of the user program
 The operating system is loaded either at the bottom or at the top
 Interrupt vectors are often located in low memory; therefore, it makes sense to load the operating
system in low memory
 Sharing of data and code does not make much sense in a single-process environment
 The operating system can be protected from user programs with the help of a fence register
Advantages of Monoprogramming
 It is a simple management approach
Disadvantages of Monoprogramming
 It does not support multiprogramming
 Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
 A memory partition scheme with a fixed number of partitions was introduced to support
multiprogramming. This scheme is based on contiguous allocation
 Each partition is a block of contiguous memory
 Memory is partitioned into a fixed number of partitions
 Each partition is of fixed size
Example: As shown in the figure, memory is partitioned into 5 regions: one region is reserved for the
operating system, and the remaining four partitions are for user programs.
[Figure: fixed-size partitioning — Operating System, p1, p2, p3, p4]
Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory partitions.
This is done through a data structure called a partition table.
Sample Partition Table
Starting Address of Partition | Size of Partition | Status
0k                            | 200k              | allocated
200k                          | 100k              | free
300k                          | 150k              | free
450k                          | 250k              | allocated
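The sample partition table can be represented directly as a small data structure. The sketch below mirrors the table above (sizes in KB); the lookup helper is illustrative, not a real OS interface.

```python
# The partition table above, kept as a list of records (sizes in KB). The
# lookup routine is illustrative, not a real OS interface.
partitions = [
    {"start": 0,   "size": 200, "status": "allocated"},
    {"start": 200, "size": 100, "status": "free"},
    {"start": 300, "size": 150, "status": "free"},
    {"start": 450, "size": 250, "status": "allocated"},
]

def find_free_partition(table, request_kb):
    """Return the first free partition large enough for the request, or None."""
    for part in table:
        if part["status"] == "free" and part["size"] >= request_kb:
            return part
    return None

slot = find_free_partition(partitions, 120)   # fits the 150 KB partition at 300k
```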

Logical vs Physical Address


An address generated by the CPU is commonly referred to as a logical address, while the address seen by
the memory unit is known as the physical address. A logical address can be mapped to a physical address
by hardware with the help of a base register; this is known as dynamic relocation of memory references.
Contiguous Memory Allocation
The main memory should accommodate both the operating system and the different client
processes. Therefore, the allocation of memory becomes an important task in the operating
system. The memory is usually divided into two partitions: one for the resident operating system and
one for the user processes. We normally need several user processes to reside in memory
simultaneously. Therefore, we need to consider how to allocate available memory to the processes that
are in the input queue waiting to be brought into memory. In contiguous memory allocation, each process
is contained in a single contiguous segment of memory.

Contiguous Memory Allocation

Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the
simplest methods for allocating memory is to divide memory into several fixed-sized partitions and
each partition contains exactly one process. Thus, the degree of multiprogramming is obtained by the
number of partitions.
 Multiple partition allocation: In this method, a process is selected from the input queue and
loaded into a free partition. When the process terminates, the partition becomes available for
other processes.
 Variable partition allocation: In this method, the operating system maintains a table that indicates
which parts of memory are available and which are occupied by processes. Initially, all memory is
available for user processes and is considered one large block of available memory. This available
memory is known as a “hole”. When a process arrives and needs memory, we search for a hole
that is large enough to store the process. If one is found, we allocate only as much memory as is
needed, keeping the rest available to satisfy future requests. While allocating memory, the dynamic
storage allocation problem arises: how to satisfy a request of size n from a list of free holes.
There are several solutions to this problem:
First Fit
In First Fit, the first free hole that is large enough to satisfy the request of the process is allocated.
First Fit

Here, in this diagram, the 40 KB memory block is the first available free hole that can store process A
(size 25 KB), because the first two blocks do not have sufficient memory space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process's requirements. For this,
we search the entire list, unless the list is ordered by size.

Best Fit

Here in this example, we first traverse the complete list and find that the last hole, 25 KB, is the
best suitable hole for process A (size 25 KB). In this method, memory utilization is maximum as
compared to the other memory allocation techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the largest
leftover hole.
Worst Fit

Here in this example, process A (size 25 KB) is allocated to the largest available memory block, which
is 60 KB. Inefficient memory utilization is a major issue with worst fit.
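The three placement strategies can be compared side by side on the same hole list. The sketch below is a simplified model (holes as plain sizes, no splitting or coalescing); the hole sizes are chosen to match the running example with process A of 25 KB.

```python
# Simplified first/best/worst fit over a list of free hole sizes (KB).
# Each function returns the index of the chosen hole, or None if no hole fits.
def first_fit(holes, size):
    """Index of the first hole that can hold `size`."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that can hold `size`."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole that can hold `size`."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [10, 20, 40, 60, 25]     # free hole sizes in KB
# Allocating process A (25 KB):
#   first_fit -> index 2 (the 40 KB hole, since the first two are too small)
#   best_fit  -> index 4 (the 25 KB hole, an exact fit)
#   worst_fit -> index 3 (the 60 KB hole, largest leftover)
```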
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory, leaving small free holes
behind. These holes cannot be assigned to new processes because they are not combined or do not fulfill
the memory requirement of the process. To achieve a good degree of multiprogramming, we must reduce
this waste of memory. Operating systems face two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a process
is larger than its requested size. The unused space left over inside the block creates the internal
fragmentation problem. Example: Suppose fixed partitioning is used for memory allocation, with
blocks of sizes 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB arrives and
demands a block of memory. It gets a memory block of 3 MB, but 1 MB of that block is wasted and
cannot be allocated to any other process. This is called internal fragmentation.
2. External fragmentation: In external fragmentation, we have free memory blocks, but we cannot
assign them to a process because the blocks are not contiguous. Example: Continuing the above
example, suppose three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB
respectively. They get memory blocks of sizes 3 MB, 6 MB, and 7 MB respectively. The allocations
to p1 and p2 leave 1 MB and 2 MB unused. Suppose a new process p4 arrives and demands a 3 MB block
of memory, which is available in total, but we cannot assign it because the free memory is not
contiguous. This is called external fragmentation.
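The arithmetic behind both examples is worth making explicit. The sketch below simply recomputes the numbers from the scenario above (all sizes in MB); it is an illustration, not an allocator.

```python
# Arithmetic behind the two fragmentation examples above (sizes in MB).
blocks    = [3, 6, 7]   # fixed partition sizes
processes = [2, 4, 7]   # p1, p2, p3 occupy one block each

# Internal fragmentation: unused space inside each allocated block.
internal = [b - p for b, p in zip(blocks, processes)]   # [1, 2, 0]

# External fragmentation: the leftovers total 3 MB, enough for p4 (3 MB),
# but no single contiguous piece is large enough.
request      = 3
total_free   = sum(internal)             # 3 MB free in total
largest_hole = max(internal)             # only 2 MB contiguous
can_allocate = largest_hole >= request   # False
```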
Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. To
overcome the external fragmentation problem, compaction is used. In the compaction technique, all free
memory space is combined into one large block, so this space can be used by other processes
effectively.
Another possible solution to the external fragmentation is to allow the logical address space of the
processes to be noncontiguous, thus permitting a process to be allocated physical memory wherever the
latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be non-contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by the CPU.
 Logical Address Space or Virtual Address Space (represented in words or bytes): The set of
all logical addresses generated by a program.
 Physical Address (represented in bits): An address actually available on a memory unit.
 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses.
Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) =
27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M =
2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) =
24 bits
The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU),
a hardware device; this mapping scheme is known as the paging technique.
 The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Let us consider an example:
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)
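With these numbers, a 13-bit logical address splits into a 3-bit page number and a 10-bit offset, and translation just swaps the page number for a frame number. The sketch below walks through that split; the page-to-frame mapping is a made-up example.

```python
# Splitting a 13-bit logical address into page number and offset with
# 1 K-word pages, then translating through a page table. The page-to-frame
# mapping below is a made-up example.
PAGE_SIZE = 1024                      # 1 K words -> 10-bit page offset

def translate_page(logical: int, page_table: dict) -> int:
    page_number = logical // PAGE_SIZE   # high-order 3 bits of a 13-bit address
    offset      = logical %  PAGE_SIZE   # low-order 10 bits
    frame       = page_table[page_number]
    return frame * PAGE_SIZE + offset    # frame base + same offset

page_table = {0: 2, 1: 0, 2: 3}          # page -> frame (hypothetical)
physical = translate_page(1 * PAGE_SIZE + 5, page_table)  # page 1, offset 5
```

Since page 1 maps to frame 0 here, logical address 1029 (page 1, offset 5) lands at physical address 5.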

Paging

The address generated by the CPU is divided into:


 Page Number (p): the number of bits required to represent a page in the logical address space;
it selects the page.
 Page Offset (d): the number of bits required to represent a particular word within a page;
it is determined by the page size.
Physical Address is divided into:
 Frame Number (f): the number of bits required to represent a frame in the physical address space;
it selects the frame.
 Frame Offset (d): the number of bits required to represent a particular word within a frame;
it is determined by the frame size.
The hardware implementation of the page table can be done using dedicated registers, but using
registers for the page table is satisfactory only if the page table is small. If the page table
contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special,
small, fast look-up hardware cache.
 The TLB is an associative, high-speed memory.
 Each entry in the TLB consists of two parts: a tag and a value.
 When this memory is used, an item is compared with all tags simultaneously. If the item is
found, the corresponding value is returned.

Page Map Table

Let main memory access time = m.

If the page table is kept in main memory:
Effective access time = m (to access the page-table entry)
                      + m (to access the word itself)
                      = 2m
TLB Hit and Miss

Implementation of Contiguous Memory Management Techniques


Memory management techniques are the basic techniques used for managing memory in the
operating system. In this article, we deal with the implementation of contiguous memory
management techniques. Memory management techniques are classified broadly into two categories:
 Contiguous
 Non-contiguous
What is Contiguous Memory Management?
Contiguous memory allocation is a memory allocation strategy. As the name implies, we utilize this
technique to assign contiguous blocks of memory to each task. Thus, whenever a process asks to access
the main memory, we allocate a continuous segment from the empty region to the process based on its
size. In this technique, memory is allotted in a continuous way to the processes. Contiguous Memory
Management has two types:
 Fixed(or Static) Partition

 Variable(or Dynamic) Partitioning

Contiguous Memory Management Techniques


Below are the two contiguous memory management techniques. Let's understand these in detail.
1. Fixed Partition Scheme
In the fixed partition scheme, memory is divided into a fixed number of partitions. Fixed means the
number of partitions in memory does not change. In the fixed partition scheme, each partition
accommodates exactly one process. The degree of multiprogramming is restricted by the number of
partitions in memory, and the maximum size of a process is restricted by the maximum size of a
partition. Every partition is associated with limit registers.
 Limit Registers: There are two limits:
 Lower Limit: starting address of the partition.
 Upper Limit: ending address of the partition.
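The protection check implied by the two limit registers can be written in one line. The sketch below is illustrative; the partition bounds are taken from the sample partition table earlier (values in KB).

```python
# Protection with limit registers: every address a process issues must lie
# between the lower and upper limits of its partition. Values in KB.
def in_partition(address: int, lower: int, upper: int) -> bool:
    """True if the address falls inside [lower, upper)."""
    return lower <= address < upper

# A partition starting at 300 KB and ending at 450 KB:
ok  = in_partition(350, 300, 450)   # True
bad = in_partition(500, 300, 450)   # False -> would trap to the OS
```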

Internal Fragmentation is found in fixed partition scheme. To overcome the problem of internal
fragmentation, instead of fixed partition scheme, variable partition scheme is used.
Disadvantages of the Fixed Partition Scheme
 Maximum process size <= maximum partition size.
 The degree of multiprogramming is limited by the number of partitions.
 Internal fragmentation, discussed above, is present.
 If a process of, say, 19 KB wants memory and the free space available is not contiguous, we are
not able to allocate the space.
2. Variable Partition Scheme
In the variable partition scheme, memory is initially a single continuous free block. Whenever a
request from a process arrives, a partition is made in memory accordingly. If smaller processes keep
arriving, larger free blocks are carved into smaller partitions.
 In the variable partition scheme, the memory is initially one contiguous free block
 Memory is divided into partitions according to process size, which varies from process to process.
 One partition is allocated to each active process.

External Fragmentation is found in variable partition scheme. To overcome the problem of external
fragmentation, compaction technique is used or non-contiguous memory management techniques are
used.

Solution of External Fragmentation


1. Compaction
Moving all the processes toward the top or the bottom of memory so that the free memory becomes
available in a single contiguous place is called compaction. Compaction is undesirable to implement
because it interrupts all the running processes in memory.
Disadvantage of Compaction
 Page fault can occur.
 It consumes CPU time (overhead).
2. Non-contiguous memory allocation
1. Physical address space: Main memory (physical memory) is divided into blocks of the same size
called frames. The frame size is defined by the operating system.
2. Logical address space: Logical memory is divided into blocks of the same size called pages.
The page size is defined by the hardware, and these pages are stored in non-contiguous frames of
main memory during execution.
Advantages of Variable Partition Scheme
 Partition size = process size
 There is no internal fragmentation (which is the drawback of fixed partition schema).
 The degree of multiprogramming varies and is directly proportional to the number of processes.
Disadvantages of the Variable Partition Scheme
 External fragmentation is still there.
Advantages of Contiguous Memory Management
 It’s simple to monitor how many memory blocks are still available for use, which determines how
many more processes can be allocated RAM.
 Considering that a complete file can be read from the disk in a single sequential pass, contiguous
allocation offers good read performance.
 Contiguous allocation is simple to set up and functions well.
Disadvantages of Contiguous Memory Management
 Fragmentation is a problem, since new files can only be written into the holes left after older
ones are deleted.
 To select an appropriately sized hole while creating a new file, the system needs to know the
file's final size.
 The extra space in the holes would need to be reduced or reused once the disk is full.

Non-Contiguous Allocation in Operating System


Non-contiguous allocation, also known as dynamic or linked allocation, is a memory allocation
technique used in operating systems to allocate memory to processes that do not require a contiguous
block of memory. In this technique, each process is allocated a series of non-contiguous blocks of
memory that can be located anywhere in the physical memory.
Non-contiguous allocation involves the use of pointers to link the non-contiguous memory blocks
allocated to a process. These pointers are used by the operating system to keep track of the memory
blocks allocated to the process and to locate them during the execution of the process.
Fundamental Approaches of Implementing Non-Contiguous Memory Allocation
There are two fundamental approaches to implementing non-contiguous memory
allocation. Paging and Segmentation are the two ways that allow a process’s physical address space
to be non-contiguous. This has the advantage of reducing memory wastage, but it increases the
overhead of address translation, which slows execution because time is consumed in translating
addresses.
 Paging
 Segmentation
What is Paging?
In paging, each process consists of fixed-size components called pages. The size of a page is defined
by the hardware of a computer, and the demarcation of pages is implicit in it. The memory can
accommodate an integral number of pages. It is partitioned into memory areas that have the same size
as a page, and each of these memory areas is considered separately for allocation to a page. This way,
any free memory area is exactly the same size as a page, so external fragmentation does not arise in
the system. Internal fragmentation can arise because the last page of a process is allocated a page-
size memory area even if it is smaller than a page in size.
Why Paging is Important?
In paging, the operating system needs to maintain a table, called the page table, for each
process; it contains the base address of each frame acquired by the process in memory.
In non-contiguous memory allocation, different parts of a process are allocated to different
places in main memory. Spanning is allowed, which is not possible in techniques like dynamic
or static contiguous memory allocation. That is why paging is needed to ensure effective memory
allocation. Paging is done to remove external fragmentation.
What is Segmentation?
In segmentation, a programmer identifies components called segments in a process. A segment is a
logical entity in a program, e.g., a set of functions, data structures, or objects. Segmentation facilitates
the sharing of code, data, and program modules between processes. However, segments have different sizes,
so the kernel has to use memory reuse techniques such as first-fit or best-fit allocation. Consequently,
external fragmentation can arise.
How Does Non-Contiguous Memory Allocation Work?
Here a process can be spanned across different spaces in main memory in a non-consecutive
manner. Suppose a process P of size 4 KB, and suppose main memory has two empty slots, each of size
2 KB, so the total free space is 2 * 2 = 4 KB. In contiguous memory allocation, process P cannot be
accommodated because spanning is not allowed.
In contiguous allocation, space in memory should be allocated to the whole process. If not, then that
space remains unallocated. But in Non-Contiguous allocation, the process can be divided into
different parts hence filling the space in the main memory. In this example, process P can be
divided into two parts of equal size – 2KB. Hence one part of process P can be allocated to the first
2KB space of main memory and the other part of the process can be allocated to the second 2KB
space of main memory. The below diagram will explain in a better way:

But the manner in which we divide a process for allocation into main memory is important to
understand. If the process is divided only after analysing the number and sizes of empty spaces in
main memory, the division becomes very time-consuming, because the number and sizes of the holes
change every time processes already present in main memory execute and terminate.
In order to avoid this time-consuming step, we divide our process in secondary memory in advance,
before it reaches main memory for execution. Every process is divided into parts of equal size
called pages, and main memory is divided into parts of the same size called frames. It is important
to understand that:

Size of page in process = Size of frame in memory

Although their numbers can be different. For example, consider an empty main memory in which each
frame is 2 KB, and two processes P1 and P2 of 4 KB each (two pages each). In the resulting main
memory, the first page of P1 is stored, then the first page of P2, then the second page of P1, and
finally the second page of P2. Hence the processes are stored in main memory in a non-contiguous
manner.
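The interleaved placement just described can be sketched as a tiny simulation: pages arrive in order and each one goes into the first free frame, so each process's pages end up scattered. The frame count and arrival order are illustrative.

```python
# Sketch of the placement above: pages arrive in the order P1.0, P2.0,
# P1.1, P2.1 and each goes into the first free frame (frame size = page size).
FRAMES = 8
memory = [None] * FRAMES                 # None marks a free frame

def place(memory, process, page):
    """Put one page into the first free frame; return the frame number."""
    frame = memory.index(None)           # any free frame would do
    memory[frame] = (process, page)
    return frame

order = [("P1", 0), ("P2", 0), ("P1", 1), ("P2", 1)]
frames_used = [place(memory, proc, pg) for proc, pg in order]
# P1's pages land in frames 0 and 2, P2's in 1 and 3 -> non-contiguous
```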

Concluding, we can say that Paging allows the memory address space of a process to be
non-contiguous. Paging is more flexible as only pages of a process are moved. It allows more
processes to reside in main memory than Contiguous memory allocation.
Advantages of Non-Contiguous Allocation
 It reduces internal fragmentation since memory blocks can be allocated as needed, regardless of
their physical location.
 It allows processes to be allocated memory in a more flexible and efficient manner since
the operating system can allocate memory to a process wherever free memory is available.
Disadvantages of Non-Contiguous Allocation
 It can lead to external fragmentation, where the available memory is broken into small, non-
contiguous blocks, making it difficult to allocate large blocks of memory to a process.
 Additionally, the use of pointers to link memory blocks can introduce additional overhead,
leading to slower memory allocation and deallocation times.
Conclusion
In conclusion, non-contiguous allocation is a useful memory allocation technique in situations where
processes do not require a contiguous block of memory. It is commonly used in operating systems,
such as Unix and Linux, where processes often require variable amounts of memory that are not
contiguous. It allows processes to use memory more efficiently by breaking them into smaller parts
that can be stored in different locations. This method helps reduce memory waste and makes better
use of available space. Although it adds some complexity in managing memory, it offers flexibility
and improved performance by minimizing fragmentation and optimizing memory usage.

Compaction in Operating System


Compaction is a technique to collect all the free memory present in the form of fragments into one
large chunk of free memory, which can be used to run other processes.
It does that by moving all the processes towards one end of the memory and all the available free
space towards the other end of the memory so that it becomes contiguous.
It is not always easy to do compaction. Compaction can be done only when the relocation is dynamic
and done at execution time. Compaction can not be done when relocation is static and is performed
at load time or assembly time.
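The effect of compaction can be sketched as a pure rearrangement of blocks: occupied blocks slide toward one end and the free holes merge into one region. The block names and sizes (KB) below are illustrative.

```python
# Sketch of compaction: slide all occupied blocks toward one end so the free
# holes merge into a single contiguous region. Sizes (KB) are illustrative.
def compact(blocks):
    """blocks: list of (owner, size); owner is None for a free hole.
    Returns occupied blocks packed together, then one merged free hole."""
    occupied = [(owner, size) for owner, size in blocks if owner is not None]
    free_kb  = sum(size for owner, size in blocks if owner is None)
    return occupied + [(None, free_kb)]

before = [("p1", 100), (None, 50), ("p2", 200), (None, 150), ("p3", 80)]
after  = compact(before)   # one merged 200 KB hole at the end
```

Note that a real OS can only do this when relocation is dynamic, as the text says: every moved process's base register must be updated to its new location.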
Before Compaction
Before compaction, the main memory has free space scattered between occupied regions. This condition
is known as external fragmentation. Because the free holes between occupied regions are small, large
processes cannot be loaded into them.
Main Memory (before): Occupied | Free | Occupied | Occupied | Free

After Compaction
After compaction, all the occupied space has been moved up and the free space sits at the bottom.
This makes the free space contiguous and removes external fragmentation. Processes with large memory
requirements can now be loaded into main memory.

Main Memory (after): Occupied | Occupied | Occupied | Occupied | Free | Free

Purpose of Compaction in Operating System


While allocating memory to a process, the operating system often faces the problem that there is a
sufficient amount of free space within memory to satisfy the memory demand of a process, yet the
request cannot be fulfilled because the free memory is non-contiguous. This problem is referred to as
external fragmentation. To solve such problems, the compaction technique is used.
Issues with Compaction
Although the compaction technique is very useful in making memory utilization efficient and in
reducing external fragmentation, its problem is that a large amount of time is wasted in the process,
during which the CPU sits idle, reducing the efficiency of the system.
Advantages of Compaction
 Reduces external fragmentation.
 Makes memory usage efficient.
 Memory becomes contiguous.
 Since memory becomes contiguous, more processes can be loaded into memory, thereby
increasing the scalability of the OS.
 Fragmentation of the file system can be temporarily removed by compaction.
 Improves memory utilization as there is less gap between memory blocks.
Disadvantages of Compaction
 System efficiency is reduced and latency is increased.
 A huge amount of time is wasted in performing compaction.
 The CPU sits idle for a long time.
 It is not always easy to perform compaction.
 It may cause deadlocks since it disturbs the memory allocation process.
FAQs on Compaction
1. Is compaction used in real-time operating systems?
Yes, it can be, but it needs to avoid timing violations.
2. Does compaction occupy additional disk space?
It acquires temporary storage space.
3. What are some alternatives to memory compaction for avoiding external fragmentation?
Fixed-size allocation, memory pools, and dynamic allocation.
4. Can a user initiate memory compaction manually?
Some systems provide this feature to optimize memory usage.

Best-Fit Allocation in Operating System


INTRODUCTION:

Best-Fit Allocation is a memory allocation technique used in operating systems to allocate memory to
a process. In Best-Fit, the operating system searches through the list of free blocks of memory to find
the block that is closest in size to the memory request from the process. Once a suitable block is
found, the operating system splits the block into two parts: the portion that will be allocated to the
process, and the remaining free block.
Advantages of Best-Fit Allocation include improved memory utilization, as it allocates the smallest
block of memory that is sufficient to accommodate the memory request from the process.
Additionally, Best-Fit can also help to reduce memory fragmentation, as it tends to allocate smaller
blocks of memory that are less likely to become fragmented.
Disadvantages of Best-Fit Allocation include increased computational overhead, as the search for the
best-fit block of memory can be time-consuming and requires a more complex search algorithm.
Additionally, Best-Fit may also result in increased fragmentation, as it may leave smaller blocks of
memory scattered throughout the memory space.
Overall, Best-Fit Allocation is a widely used memory allocation technique in operating systems, but
its effectiveness may vary depending on the specifics of the system and the workload being executed.
For both fixed and dynamic memory allocation schemes, the operating system must keep a list of each memory location, noting which are free and which are busy. Then, as new jobs come into the system, the free partitions must be allocated.
These partitions may be allocated in 4 ways:

1. First-Fit Memory Allocation
2. Best-Fit Memory Allocation
3. Worst-Fit Memory Allocation
4. Next-Fit Memory Allocation
These are Contiguous memory allocation techniques.
Best-Fit Memory Allocation:
This method keeps the free/busy list in order by size, smallest to largest. In this method, the operating system searches the whole memory according to the size of the given job and allocates the job to the closest-fitting free partition, making it able to use memory efficiently. Here the jobs are ordered from smallest to largest.

As illustrated in the figure above, the operating system searches throughout the memory and allocates the job to the smallest possible memory partition that fits, making the memory allocation efficient.
Advantages of Best-Fit Allocation :
 Memory Efficient. The operating system allocates the job the minimum possible space in memory, making memory management very efficient.
 It is the best method for saving memory from getting wasted.
 Improved memory utilization
 Reduced memory fragmentation
 Minimizes external fragmentation

Disadvantages of Best-Fit Allocation :


 It is a Slow Process. Checking the whole memory for each job makes the working of the operating system very slow; it takes a lot of time to complete the work.
 Increased computational overhead
 May lead to increased internal fragmentation
 Can result in slow memory allocation times.

Best-fit allocation is a memory allocation algorithm used in operating systems to allocate memory to
processes. In this algorithm, the operating system searches for the smallest free block of memory that
is big enough to accommodate the process being allocated memory.
Here is a brief overview of the best-fit allocation algorithm:
1. The operating system maintains a list of all free memory blocks available in the system.
2. When a process requests memory, the operating system searches the list for the smallest free
block of memory that is large enough to accommodate the process.
3. If a suitable block is found, the process is allocated memory from that block.
4. If no suitable block is found, the operating system can either wait until a suitable block becomes
available or request additional memory from the system.
5. The best-fit allocation algorithm has the advantage of minimizing external fragmentation, as it
searches for the smallest free block of memory that can accommodate a process. However, it can
also lead to more internal fragmentation, as processes may not use the entire memory block
allocated to them.
Overall, the best-fit allocation algorithm can be an effective way to allocate memory in an operating
system, but it is important to balance the advantages and disadvantages of this approach with other
allocation algorithms such as first-fit, next-fit, and worst-fit.
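The steps above can be sketched as a small function; the hole sizes in the usage example are assumptions chosen for illustration:

```python
def best_fit(partitions, request):
    """Return the index of the smallest free partition that fits, or None."""
    best = None
    for i, size in enumerate(partitions):
        # Candidate only if it fits; keep it if it is smaller than the best so far.
        if size >= request and (best is None or size < partitions[best]):
            best = i
    return best

holes = [100, 500, 200, 300, 600]
print(best_fit(holes, 212))  # picks index 3: 300 is the smallest hole >= 212
```

Note the full scan over the list on every request: this is the linear search cost the disadvantages above refer to (keeping the free list sorted by size can reduce it).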
Worst-Fit Allocation in Operating Systems
For both fixed and dynamic memory allocation schemes, the operating system
must keep a list of each memory location noting which are free and which are
busy. Then as new jobs come into the system, the free partitions must be
allocated.
These partitions may be allocated in 4 ways:
1. First-Fit Memory Allocation
2. Best-Fit Memory Allocation
3. Worst-Fit Memory Allocation
4. Next-Fit Memory Allocation
These are Contiguous memory allocation techniques.
Worst-Fit Memory Allocation :
In this allocation technique, the operating system traverses the whole memory,
always searches for the largest hole/partition, and then places the process in
that hole/partition. It is a slow process because it has to traverse the entire
memory to search for the largest hole.
Worst-fit allocation tends to leave large blocks of memory unused, but it has
specific use cases in system design.
Here is an example to understand Worst Fit-Allocation –

Here process P1 = 30K is allocated with the worst-fit technique: the allocator
traverses the entire memory and selects the 400K partition, which is the
largest. Hence there is an internal fragmentation of 370K, which is very large,
so many other processes can also utilize this leftover space.
Advantages of Worst-Fit Allocation :
Since this process chooses the largest hole/partition, there will be large
internal fragmentation. This internal fragmentation is quite big, so other
small processes can also be placed in that leftover partition.
Disadvantages of Worst-Fit Allocation :
It is a slow process because it traverses all the partitions in the memory and
then selects the largest partition among all the partitions, which is a time-
consuming process.
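A minimal sketch of the worst-fit search described above; the hole sizes are assumptions, chosen so that the 30K process lands in the 400K hole as in the example:

```python
def worst_fit(partitions, request):
    """Return the index of the largest free partition that fits, or None."""
    worst = None
    for i, size in enumerate(partitions):
        # Candidate only if it fits; keep it if it is larger than the worst so far.
        if size >= request and (worst is None or size > partitions[worst]):
            worst = i
    return worst

holes = [200, 400, 100, 300]
idx = worst_fit(holes, 30)
print(idx, holes[idx] - 30)  # index 1: the 400K hole, leaving 370K unused
```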
First-Fit Allocation in Operating Systems

INTRODUCTION:

First-Fit Allocation is a memory allocation technique used in operating
systems to allocate memory to a process. In First-Fit, the operating system
searches through the list of free blocks of memory, starting from the beginning
of the list, until it finds a block that is large enough to accommodate the
memory request from the process. Once a suitable block is found, the
operating system splits the block into two parts: the portion that will be
allocated to the process, and the remaining free block.
Advantages of First-Fit Allocation include its simplicity and efficiency, as the
search for a suitable block of memory can be performed quickly and easily.
Additionally, First-Fit can also help to minimize memory fragmentation, as it
tends to allocate memory in larger blocks.
Disadvantages of First-Fit Allocation include poor performance in situations
where the memory is highly fragmented, as the search for a suitable block of
memory can become time-consuming and inefficient. Additionally, First-Fit
can also lead to poor memory utilization, as it may allocate larger blocks of
memory than are actually needed by a process.
Overall, First-Fit Allocation is a widely used memory allocation technique in
operating systems, but its effectiveness may vary depending on the specifics of
the system and the workload being executed.
For both fixed and dynamic memory allocation schemes, the operating system
must keep a list of each memory location noting which are free and which are
busy. Then as new jobs come into the system, the free partitions must be
allocated. These partitions may be allocated in 4 ways:
1. First-Fit Memory Allocation
2. Best-Fit Memory Allocation
3. Worst-Fit Memory Allocation
4. Next-Fit Memory Allocation
These are Contiguous memory allocation techniques.

First-Fit Memory Allocation: This method keeps the free/busy list of jobs
organized by memory location, low-ordered to high-ordered memory. In this
method, a job claims the first available memory block with space greater than
or equal to its size. The operating system doesn't search for an appropriate
partition; it just allocates the job to the nearest available memory partition
of sufficient size.
As illustrated above, the system assigns J1 the nearest partition in memory.
As a result, no partition with sufficient space is available for J3, and it is
placed in the waiting list. The processor ignores whether the size of the
partition allocated to the job is very large compared to the size of the job;
it just allocates the memory. As a result, a lot of memory is wasted, and many
jobs may not get space in memory and would have to wait for another job to
complete.
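The first-fit scan is the simplest of the three searches; a minimal sketch, with hole sizes assumed for illustration:

```python
def first_fit(partitions, request):
    """Return the index of the first free partition large enough, or None."""
    for i, size in enumerate(partitions):
        if size >= request:
            return i  # stop at the first hole that fits, regardless of waste
    return None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # index 1: 500 is the first hole >= 212
```

Compared with best-fit, the search stops early instead of scanning the whole list, which is why first-fit is fast but may waste space by choosing a larger hole than necessary.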

ADVANTAGES OR DISADVANTAGES:

Advantages of First-Fit Allocation in Operating Systems:

1. Simple and efficient search algorithm
2. Minimizes memory fragmentation
3. Fast allocation of memory

Disadvantages of First-Fit Allocation in Operating Systems:

1. Poor performance in highly fragmented memory
2. May lead to poor memory utilization
3. May allocate larger blocks of memory than required
Fixed (or static) Partitioning in Operating System


Fixed partitioning, also known as static partitioning, is one of the earliest memory
management techniques used in operating systems. In this method, the main
memory is divided into a fixed number of partitions at system startup, and each
partition is allocated to a process. These partitions remain unchanged throughout
system operation, ensuring a simple, predictable memory allocation process.
Despite its simplicity, fixed partitioning has several limitations, such as internal
fragmentation and inflexible handling of varying process sizes. This article delves
into the advantages, disadvantages, and applications of fixed partitioning in
modern operating systems.

What is Fixed (or static) Partitioning in the Operating System?


Fixed (or static) partitioning is one of the earliest and simplest memory
management techniques used in operating systems. It involves dividing the main
memory into a fixed number of partitions at system startup, with each partition
being assigned to a process. These partitions remain unchanged throughout the
system’s operation, providing each process with a designated memory space. This
method was widely used in early operating systems and remains relevant in
specific contexts like embedded systems and real-time applications. However,
while fixed partitioning is simple to implement, it has significant limitations,
including inefficiencies caused by internal fragmentation.
1. In fixed partitioning, the memory is divided into fixed-size chunks, with
each chunk being reserved for a specific process. When a process requests
memory, the operating system assigns it to an appropriate partition. The
partitions are fixed in size at system boot time, though they need not all be
the same size.
2. Fixed partitioning has several advantages over other memory allocation
techniques. First, it is simple and easy to implement. Second, it is
predictable, meaning the operating system can ensure a minimum amount of
memory for each process. Third, it can prevent processes from interfering
with each other’s memory space, improving the security and stability of the
system.
3. However, fixed partitioning also has some disadvantages. It can lead to
internal fragmentation, where memory in a partition remains unused. This
can happen when the process’s memory requirements are smaller than the
partition size, leaving some memory unused. Additionally, fixed partitioning
limits the number of processes that can run concurrently, as each process
requires a dedicated partition.
Overall, fixed partitioning is a useful memory allocation technique in situations
where the number of processes is fixed, and the memory requirements for each
process are known in advance. It is commonly used in embedded systems, real-
time systems, and systems with limited memory resources.
In operating systems, Memory Management is the function responsible for
allocating and managing a computer’s main memory. Memory
Management function keeps track of the status of each memory location, either
allocated or free to ensure effective and efficient use of Primary Memory.

There are two Memory Management Techniques:


1. Contiguous
2. Non-Contiguous
Contiguous Memory Allocation:
In contiguous memory allocation, each process is assigned a single continuous
block of memory in the main memory. The entire process is loaded into one
contiguous memory region.
In Contiguous Technique, executing process must be loaded entirely in the main
memory.
Contiguous Technique can be divided into:
 Fixed (or static) partitioning
 Variable (or dynamic) partitioning

Fixed Partitioning:

This is the oldest and simplest technique used to put more than one process in the
main memory. In this partitioning, the number of partitions (non-overlapping)
in RAM is fixed but the size of each partition may or may not be the same. As
it is a contiguous allocation, hence no spanning is allowed. Here partitions are
made before execution or during system configure.
As illustrated in the figure above, the first process consumes only 1MB out of
the 4MB partition in main memory.
Hence, internal fragmentation in the first block is (4-1) = 3MB.
Sum of internal fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14) =
3+1+1+2 = 7MB.

Suppose process P5 of size 7MB comes. This process cannot be accommodated, in
spite of the available free space, because of contiguous allocation (spanning
is not allowed). Hence, the 7MB becomes part of external fragmentation.
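The fragmentation arithmetic above can be checked with a short sketch, using the partition and process sizes from the example:

```python
partitions = [4, 8, 8, 16]   # MB, fixed at system boot (from the example above)
processes  = [1, 7, 7, 14]   # MB, one process loaded per partition

# Internal fragmentation: unused space left inside each occupied partition.
internal = sum(part - proc for part, proc in zip(partitions, processes))
print(internal)  # (4-1) + (8-7) + (8-7) + (16-14) = 7 MB

# A new 7 MB process (P5) still cannot be loaded: the 7 MB of free space is
# scattered across partitions, and spanning across partitions is not allowed.
```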
Advantages of Fixed Partitioning
 Easy to implement: The algorithms required are simple and straightforward.
 Low overhead: Requires minimal system resources to manage, ideal for
resource-constrained systems.
 Predictable: Memory allocation is predictable, with each process receiving a
fixed partition.
 No external fragmentation: Since the memory is divided into fixed
partitions and no spanning is allowed, external fragmentation is avoided.
 Suitable for systems with a fixed number of processes: Ideal for systems
where the number of processes and their memory requirements are known in
advance.
 Prevents process interference: Ensures that processes do not interfere with
each other’s memory, improving system stability.
 Efficient memory use: Particularly in systems with fixed, known processes
and batch processing scenarios.
 Good for batch processing: Works well in environments where the number
of processes remains constant over time.
 Better control over memory allocation: The operating system has clear
control over how memory is allocated and managed.
 Easy to debug: Fixed Partitioning is easy to debug since the size and
location of each process are predetermined.
Disadvantages of Fixed Partitioning
1. Internal Fragmentation: Main memory use is inefficient. Any program, no
matter how small, occupies an entire partition. This can cause internal
fragmentation.

2. Limit process size: Process of size greater than the size of the partition in
Main Memory cannot be accommodated. The partition size cannot be varied
according to the size of the incoming process size. Hence, the process size of
32MB in the above-stated example is invalid.

3. Limitation on Degree of Multiprogramming: Partitions in main memory are
made before execution or during system configuration. Main memory is divided
into a fixed number of partitions. Suppose there are n1 partitions in RAM and
n2 processes; then the condition n2 ≤ n1 must be fulfilled. A number of
processes greater than the number of partitions in RAM is invalid in Fixed
Partitioning.
Clarification:
Internal fragmentation is a notable disadvantage in fixed partitioning, whereas
external fragmentation is not applicable because processes cannot span across
multiple partitions, and memory is allocated in fixed blocks.
Non-Contiguous Memory Allocation:
In non-contiguous memory allocation, a process is divided into multiple blocks
or segments that can be loaded into different parts of the memory, rather than
requiring a single continuous block.
Key Features:
Divided memory blocks: A process is divided into smaller chunks (pages,
segments) and placed in available memory blocks, which can be located
anywhere in the memory.
Paging and Segmentation:
 Paging: Divides memory into fixed-size blocks called pages. Pages of a
process can be placed in any available memory frames.
 Segmentation: Divides memory into variable-sized segments based on
logical sections of a program, like code, data, and stack.
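The page/offset split that paging relies on can be sketched as follows; the 4 KB page size and the page-table contents are illustrative assumptions, not values from the text:

```python
PAGE_SIZE = 4096  # bytes; a common page size, assumed for this sketch

def translate(virtual_address, page_table):
    """Split a virtual address into (page, offset) and map it to a physical address."""
    page = virtual_address // PAGE_SIZE      # which page the address falls in
    offset = virtual_address % PAGE_SIZE     # position within that page
    frame = page_table[page]                 # page-table lookup: page -> frame
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 9}              # hypothetical page-to-frame mapping
print(translate(8300, page_table))
# 8300 = page 2, offset 108 -> frame 9, so physical = 9*4096 + 108 = 36972
```

Because each page can land in any free frame, the frames of one process need not be adjacent, which is exactly what makes this allocation non-contiguous.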
Conclusion
Fixed partitioning, though straightforward and easy to manage, presents several
challenges, particularly in the form of internal fragmentation and limited
flexibility in handling varying process sizes. This memory allocation technique
works well in environments where memory requirements are predictable and
stable. However, for modern systems with dynamic workloads and varying
memory demands, more flexible techniques like dynamic partitioning or non-
contiguous allocation methods have become preferable. Nonetheless,
understanding fixed partitioning is crucial for grasping the evolution of memory
management in operating systems and its applications in specialized
environments.

Questions for Practice:

1. Recall the function of swapping in an operating system.
2. Identify the pros and cons of using compaction in memory management.
3. Differentiate logical address and physical address in memory management.
4. Point out the reasons for using memory management.
5. Explain how locality of reference impacts memory performance.
6. Analyze the function of the Page Table Base Register (PTBR) in the paging mechanism of an operating system.
7. Summarize the function of demand paging in virtual memory.
8. Differentiate between internal and external fragmentation in terms of their impact on memory usage.
9. Assess whether variable partition memory allocation is better than fixed partition for certain applications.
10. Given free memory partitions of 150 K, 400 K, 250 K, 350 K, and 700 K (in order), how would each of the First-fit, Best-fit, and Worst-fit algorithms place processes of 175 K, 300 K, 125 K, and 500 K (in order)?
11. Describe the concept of Virtual Memory and outline the procedure for translating virtual addresses into physical addresses, including a clear diagram.
12. Design an example to demonstrate the benefits of memory compaction in a system.
13. Summarize how segmentation differs from paging.
14. Consider a reference string: 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The number of frames in the memory is 3. Analyze the number of page faults with respect to:
    1. Optimal Page Replacement Algorithm
    2. FIFO Page Replacement Algorithm
15. Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Analyze the number of page faults, with proper explanation, using the Optimal page replacement algorithm.
16. Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which of the following is true with respect to the page replacement policies First-In-First-Out (FIFO) and Least Recently Used (LRU)? Analyze the number of page faults and determine whether both incur the same number of page faults or not.
17. Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults using FIFO and LRU page replacement techniques.
18. Compare the efficiency of various page replacement algorithms in managing virtual memory.
19. Differentiate between contiguous and non-contiguous memory allocation with examples.
20. You have the following free memory partitions (in kilobytes): 80 K, 300 K, 150 K, 500 K, and 250 K. You need to allocate processes of sizes 90 K, 210 K, 120 K, and 400 K. Calculate how each of the following memory allocation techniques would place the processes into the partitions: First-fit, Best-fit, Worst-fit.
21. Design an example of variable partitioning that reduces internal fragmentation, and then analyze the situation when it may cause external fragmentation.
22. Investigate how different page replacement algorithms impact system performance under heavy load.
