Memory Management in OS
Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In
this technique, the partition size is not declared initially. It is declared at the time of
process loading.
The first partition is reserved for the operating system. The remaining space is
divided into parts. The size of each partition will be equal to the size of the process.
The partition size varies according to the need of the process so that the internal
fragmentation can be avoided.
1. First Fit Algorithm
The First Fit algorithm scans the linked list and, as soon as it finds the first hole big enough to store the process, it stops scanning and loads the process into that hole. This procedure produces two partitions: one partition stores the process, while the other remains a hole.
The First Fit algorithm maintains the linked list in increasing order of starting address. It is the simplest of all the algorithms to implement and produces bigger holes compared to the other algorithms.
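The scan described above can be sketched in a few lines. This is a minimal illustration, not a real allocator; the hole sizes (in KB) are invented for the example.

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

holes = [10, 40, 25, 60]    # free holes in increasing order of starting address
idx = first_fit(holes, 25)  # the first hole >= 25 KB is the 40 KB hole
# Allocating splits that hole into the process partition and a leftover hole.
holes[idx] -= 25
print(idx, holes)  # 1 [10, 15, 25, 60]
```

Note that the scan stops at the 40 KB hole even though the 25 KB hole later in the list would have been an exact fit.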
2. Next Fit Algorithm
The Next Fit algorithm is similar to the First Fit algorithm, except that Next Fit resumes scanning the linked list from the node where it previously allocated a hole.
Next Fit doesn't scan the whole list from the beginning; it starts scanning from the next node. The idea behind Next Fit is that the earlier part of the list has already been scanned, so the probability of finding a hole is larger in the remaining part of the list.
Experiments have shown that Next Fit is not better than First Fit, so it is rarely used these days.
3. Best Fit Algorithm
The Best Fit algorithm tries to find the smallest hole in the list that can accommodate the size requirement of the process.
Using Best Fit has some disadvantages.
1. It is slower because it scans the entire list every time to find the smallest hole that can satisfy the requirement of the process.
2. Because the difference between the hole size and the process size is very small, the holes produced are so small that they cannot be used to load any process and therefore remain useless.
Despite its name, Best Fit is not the best algorithm among all.
4. Worst Fit Algorithm
The Worst Fit algorithm scans the entire list every time and tries to find the biggest hole in the list that can fulfill the requirement of the process.
Although this algorithm leaves behind larger holes for loading other processes, it is not the better approach, because it is slower: it searches the entire list again and again.
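Best Fit and Worst Fit differ only in which qualifying hole they choose. A sketch, with the same invented hole sizes as before:

```python
def best_fit(holes, size):
    """Index of the smallest hole that fits, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the biggest hole that fits, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [10, 40, 25, 60]
print(best_fit(holes, 25))   # 2: the 25 KB hole leaves no leftover at all
print(worst_fit(holes, 25))  # 3: the 60 KB hole leaves the biggest leftover
```

Both functions build the full candidate list, which mirrors the full-list scan that makes these algorithms slower than First Fit.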
5. Quick Fit Algorithm
The Quick Fit algorithm suggests maintaining separate lists of holes of frequently used sizes. However, it is not practically advisable, because the procedure consumes significant time creating and maintaining the different lists and then splitting holes to load a process.
The First Fit algorithm is the best algorithm among all because:
1. It produces bigger holes that can be used to load other processes later on.
2. It is the easiest to implement.
Compaction
We got to know that dynamic partitioning suffers from external fragmentation, which can cause some serious problems.
We could avoid compaction altogether only by relaxing the rule that says a process can't be stored at different places in the memory.
Alternatively, we can use compaction to minimize the probability of external fragmentation. In compaction, all the free partitions are made contiguous and all the loaded partitions are brought together.
By applying this technique, we can store the bigger processes in the memory. The
free partitions are merged which can now be allocated according to the needs of new
processes. This technique is also called defragmentation.
As shown in the image above, the process P5, which could not be loaded into the memory due to the lack of contiguous space, can now be loaded, since the free partitions have been made contiguous.
Problem with Compaction
The efficiency of the system decreases in the case of compaction, because all the free spaces must be transferred from several places to a single place. A huge amount of time is invested in this procedure, and the CPU remains idle for all this time. Although compaction avoids external fragmentation, it makes the system inefficient.
Let us consider that the OS needs 6 ns to copy 1 byte from one place to another.
1. A 1 B transfer needs 6 ns
2. A 256 MB transfer needs 256 x 2 ^ 20 x 6 x 10 ^ -9 secs
Hence, it is clear that a large memory transfer needs a significant amount of time, on the order of seconds.
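The arithmetic above is easy to check; the 6 ns per byte figure is the example's assumption, not a real hardware number.

```python
# Cost of compacting 256 MB at an assumed 6 ns per byte copied.
ns_per_byte = 6e-9
total_bytes = 256 * 2 ** 20          # 256 MB in bytes
seconds = total_bytes * ns_per_byte
print(round(seconds, 2))  # about 1.61 seconds of pure copying, CPU idle
```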
Need for Paging
Disadvantage of Dynamic Partitioning
The main disadvantage of Dynamic Partitioning is External fragmentation. Although,
this can be removed by Compaction but as we have discussed earlier, the
compaction makes the system inefficient.
We need to find out a mechanism which can load the processes in the partitions in a
more optimal way. Let us discuss a dynamic and flexible mechanism called paging.
Need for Paging
Lets consider a process P1 of size 2 MB and the main memory which is divided into
three partitions. Out of the three partitions, two partitions are holes of size 1 MB
each.
P1 needs 2 MB space in the main memory to be loaded. We have two holes of 1 MB
each but they are not contiguous.
Although there is 2 MB of space available in the main memory in the form of those holes, it remains useless until it becomes contiguous. This is a serious problem to address.
We need to have some kind of mechanism which can store one process at different
locations of the memory.
The idea behind paging is to divide the process into pages so that we can store them in the memory in different holes. We will discuss paging with examples in the next sections.
Physical and Logical Address Space
Physical Address Space
Physical address space in a system can be defined as the size of the main memory.
It is really important to compare the process size with the physical address space.
The process size must be less than the physical address space.
Let us consider (sizes assumed for this example),
Physical address space = 64 KB = 2 ^ 16 Bytes
word size = 8 Bytes = 2 ^ 3 Bytes
Hence,
Physical address space (in words) = (2 ^ 16) / (2 ^ 3) = 2 ^ 13 Words
Therefore,
Physical Address = 13 bits
In general,
If, Physical Address Space = N words
Then, Physical Address = Log2N bits
Similarly,
If, Logical Address Space = L words
Then, Logical Address = Log2L bits
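The log2 formulas can be checked directly. The space sizes below are the assumed values from the example above (plus an invented logical space for contrast):

```python
import math

physical_space_words = 2 ** 13   # from the example: 64 KB / 8 B words
logical_space_words = 2 ** 10    # invented value for illustration

physical_bits = int(math.log2(physical_space_words))
logical_bits = int(math.log2(logical_space_words))
print(physical_bits, logical_bits)  # 13 10
```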
What is a Word?
The word is the smallest unit of the memory that is addressed as a whole; it is a collection of bytes. Every operating system defines its word size after analyzing the n-bit address that is input to the decoder and the 2 ^ n memory locations that the decoder produces.
Page Table in OS
Page Table is a data structure used by the virtual memory system to store the
mapping between logical addresses and physical addresses.
Logical addresses are generated by the CPU for the pages of the processes
therefore they are generally used by the processes.
Physical addresses are the actual frame address of the memory. They are generally
used by the hardware or more specifically by RAM subsystems.
The image given below considers,
Physical Address Space = M words
Logical Address Space = L words
Page Size = P words
The CPU always accesses the processes through their logical addresses. However,
the main memory recognizes physical address only.
In this situation, a unit named as Memory Management Unit comes into the picture. It
converts the page number of the logical address to the frame number of the physical
address. The offset remains same in both the addresses.
To perform this task, Memory Management unit needs a special kind of mapping
which is done by page table. The page table stores all the Frame numbers
corresponding to the page numbers of the page table.
In other words, the page table maps the page number to its actual location (frame
number) in the memory.
The image given below shows how the required word of the frame is accessed with the help of the offset.
Mapping from page table to main memory
In operating systems, there is always a requirement of mapping from logical address
to the physical address. However, this process involves various steps which are
defined as follows.
1. Generation of logical address
CPU generates logical address for each page of the process. This contains two
parts: page number and offset.
2. Scaling
To determine the actual page number of the process, CPU stores the page table
base in a special register. Each time the address is generated, the value of the page
table base is added to the page number to get the actual location of the page entry in
the table. This process is called scaling.
3. Generation of physical Address
The frame number of the desired page is determined by its entry in the page table. A
physical address is generated which also contains two parts : frame number and
offset. The Offset will be similar to the offset of the logical address therefore it will be
copied from the logical address.
4. Getting Actual Frame Number
The frame number and the offset from the physical address is mapped to the main
memory in order to get the actual word address.
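The four steps above amount to splitting the logical address, looking up the frame, and recombining. A sketch, assuming a 1024-word page size and an invented page table:

```python
PAGE_SIZE = 1024                  # assumed page size in words
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page number -> frame number

def translate(logical_address):
    # Step 1: split the logical address into page number and offset.
    page, offset = divmod(logical_address, PAGE_SIZE)
    # Steps 2-3: the MMU looks up the frame number in the page table;
    # the offset is copied unchanged into the physical address.
    frame = page_table[page]
    # Step 4: frame number and offset give the actual word address.
    return frame * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```

The "scaling" step (adding the page table base register) is hidden here inside the dictionary lookup; in hardware it is an address computation into the in-memory table.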
Page Table Entry
Along with the frame number, the page table also contains some bits representing extra information about the page.
Let's see what each bit tells us about the page.
1. Caching Disabled
Sometimes the copy of the data closest to the CPU (in the cache) differs from the copy in memory. The operating system wants the CPU to access the user's data as soon as possible, but the cache can be stale in some cases; therefore, the OS can disable caching for the required pages. This bit is set to 1 if caching is disabled for the page.
2. Referenced
There are various page replacement algorithms, which will be covered later in this tutorial. This bit is set to 1 if the page was referenced in the last clock cycle; otherwise, it remains 0.
3. Modified
This bit will be set if the page has been modified otherwise it remains 0.
4. Protection
The protection field represents the protection level which is applied on the page. It
can be read only or read & write or execute. We need to remember that it is not a bit
rather it is a field which contains many bits.
5. Present/Absent
In the concept of demand paging, not all the pages need to be present in the main memory. Therefore, this bit is set to 1 for all the pages that are present in the main memory, and it is 0 for all the pages that are absent.
If a required page is not present in the main memory, the situation is called a page fault.
A conventional page table keeps one entry per page of every process, which wastes space on pages that are not in memory. We can save this wastage by inverting the page table: we store details only for the pages that are present in the main memory. The frames are the indices, and the information saved in each entry is the process ID and the page number.
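An inverted page table can be sketched as an array indexed by frame number; the process IDs and page numbers below are invented for illustration.

```python
# Inverted page table: one entry per physical frame.
# Each entry records which (process id, page number) occupies that frame.
inverted = [None, ("P1", 0), ("P2", 3), ("P1", 1)]  # index = frame number

def lookup(pid, page):
    """Search the frames for (pid, page); None means a page fault."""
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    return None

print(lookup("P1", 1))  # 3
print(lookup("P2", 9))  # None -> page fault
```

The price of the smaller table is visible here: translation becomes a search over frames rather than a direct index by page number, which is why real systems pair inverted tables with hashing or a TLB.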
Page Replacement Algorithms in
Operating Systems (OS)
Today we are going to learn about Page Replacement Algorithms in Operating
Systems (OS). Before knowing about Page Replacement Algorithms in Operating
Systems let us learn about Paging in Operating Systems and also a little about
Virtual Memory.
Only after understanding the concept of Paging we will understand about Page
Replacement Algorithms.
Paging in Operating Systems (OS)
Paging is a storage mechanism. Paging is used to retrieve processes from
secondary memory to primary memory.
A process is divided into small blocks called pages, and the main memory is divided into blocks of the same size called frames. Each page of a process that is retrieved into the main memory is stored in one frame of memory.
It is very important for pages and frames to be of equal size, since this makes mapping simple and allows complete utilization of memory.
Virtual Memory in Operating Systems (OS)
Virtual memory is a storage method that gives the user the impression that the main memory is very large. This is accomplished by treating a portion of secondary memory as if it were main memory.
This approach allows users to load programs larger than the available primary memory, by giving the impression that enough memory is available to load the process.
Instead of loading one single large process into the main memory, the operating system loads parts of several processes there.
By doing this, the degree of multiprogramming is increased, which in turn increases CPU utilization.
Demand Paging
Demand paging is a situation that occurs in virtual memory. We know that the pages of a process are stored in secondary memory. A page is brought into the main memory only when it is required, and we do not know in advance when this requirement will occur. When the main memory is full, page replacement algorithms decide which pages to evict to make room.
So, the process of bringing pages from secondary memory to main memory upon demand is known as demand paging.
Virtual memory in operating systems has two important jobs. They are:
o Frame Allocation
o Page Replacement.
Frame Allocation in Virtual Memory
Demand paging is used to implement virtual memory, an essential component of
operating systems. A page-replacement mechanism and a frame allocation algorithm
must be created for demand paging. If you have numerous processes, frame
allocation techniques are utilized to determine how many frames to provide to each
process.
A physical address is required by the Central Processing Unit (CPU) for frame creation, and physical addressing provides the actual address of the frame created. A frame must be created for each page.
Frame Allocation Constraints
o The number of frames allocated cannot be greater than the total number of frames.
o Each process should be given a set minimum number of frames.
o When fewer frames are allocated, the page fault rate increases and the process execution becomes less efficient.
o There ought to be sufficient frames to accommodate all the pages that a single instruction may refer to.
Frame Allocation Algorithms
There are three types of Frame Allocation Algorithms in Operating Systems. They
are:
1) Equal Frame Allocation Algorithms
Here, in this frame allocation algorithm, we take the number of frames and the number of processes at once. We divide the number of frames by the number of processes to get the number of frames we must provide for each process.
This means that if we have 36 frames and 6 processes, each process is allocated 6 frames.
It is not very logical to assign equal frames to all processes in systems with processes of different sizes. A lot of allocated but unused frames will eventually be wasted if many frames are given to a small process.
2) Proportionate Frame Allocation Algorithms
Here, in this frame allocation algorithm, the number of frames is chosen based on the process size. For big processes, more frames are allocated; for small processes, fewer frames are allocated by the operating system.
The problem with the proportionate frame allocation algorithm is that frames are wasted in some rare cases.
The advantage of the proportionate frame allocation algorithm is that, instead of an equal split, each process receives frames in proportion to its demands.
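Both allocation rules are one-line formulas; a sketch, with the 36-frame example from above plus invented process sizes for the proportionate case:

```python
def equal_allocation(frames, n_processes):
    """Every process gets the same share."""
    return [frames // n_processes] * n_processes

def proportional_allocation(frames, sizes):
    """Each process gets frames in proportion to its size."""
    total = sum(sizes)
    return [frames * s // total for s in sizes]

print(equal_allocation(36, 6))                    # [6, 6, 6, 6, 6, 6]
print(proportional_allocation(64, [10, 30, 60]))  # [6, 19, 38]
```

Note that the proportionate split allocates only 63 of the 64 frames because of integer rounding, which illustrates the "wasted frames in some rare cases" mentioned above.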
3) Priority Frame Allocation Algorithms
Priority frame allocation distributes frames according to process priority as well as the number of frames available. If a process has a high priority and needs more frames, additional frames are given to it. High-priority processes are executed first, and lower-priority processes are executed later.
Page Replacement Algorithms
There are three types of Page Replacement Algorithms. They are:
o Optimal Page Replacement Algorithm
o First In First Out Page Replacement Algorithm
o Least Recently Used (LRU) Page Replacement Algorithm
First in First out Page Replacement Algorithm
This is the first basic page replacement algorithm. It depends on the number of frames used: each frame holds a certain page that processes try to access. The actual problem starts once the frames are filled; the fixed number of frames is first filled with the initially referenced pages, with the help of demand paging.
After the frames fill up, the next page in the waiting queue tries to enter a frame. If the page is already present in one of the allocated frames, there is no problem, because the page being searched for is already there.
If the page to be searched is found among the frames then, this process is known as
Page Hit.
If the page to be searched is not found among the frames then, this process is
known as Page Fault.
When Page Fault occurs this problem arises, then the First In First Out Page
Replacement Algorithm comes into picture.
The First In First Out (FIFO) page replacement algorithm removes the page that was allotted a frame longest ago. That is, the page that has been in a frame for the longest time is removed, and the new page, which is in the ready queue and ready to occupy a frame, is admitted.
Let us understand this First In First Out Page Replacement Algorithm working with
the help of an example.
Example:
Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a
memory with three frames and calculate number of page faults by using FIFO (First
In First Out) Page replacement algorithms.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
Reference String:
Number of Page Hits = 8
Number of Page Faults = 12
The Ratio of Page Hit to the Page Fault = 8 : 12 - - - > 2 : 3 - - - > 0.66
The Page Hit Percentage = 8 *100 / 20 = 40%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
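The counts above can be verified with a short simulation; this is a sketch in which the FIFO eviction order is modeled with a deque.

```python
from collections import deque

def fifo_faults(reference, n_frames):
    """Count page faults for FIFO replacement with n_frames frames."""
    frames, order, faults = set(), deque(), 0
    for page in reference:
        if page not in frames:          # page fault
            faults += 1
            if len(frames) == n_frames:
                frames.remove(order.popleft())  # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
faults = fifo_faults(ref, 3)
print(faults, len(ref) - faults)  # 12 faults, 8 hits
```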
Explanation
First, fill the frames with the initial pages. Then, after the frames are filled, we need to create space in the frames for a new page to occupy. So, with the help of the First In First Out page replacement algorithm, we remove the page that is oldest among the pages in the frames. By removing the oldest page, we make room for the new page to occupy the space it frees.
OPTIMAL Page Replacement Algorithm
This is the second basic page replacement algorithm. As in FIFO, the fixed number of frames is first filled with the initially referenced pages through demand paging. After that, each referenced page is either found among the frames (a page hit) or not found (a page fault).
When Page Fault occurs this problem arises, then the OPTIMAL Page Replacement
Algorithm comes into picture.
The OPTIMAL page replacement algorithm works on a certain principle. The principle is:
Replace the page that will not be used for the longest duration of time in the future.
This means that after all the frames are filled, we look at the future references of the pages currently in the frames and choose as the victim the page whose next use occurs last.
Example:
Suppose the remaining reference string is:
0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
and 6, 1, 2 are currently occupying the frames.
Now we need to bring 0 into a frame by removing one page from the frames. So let us check which of the current pages occurs last in the future: from the subsequence 0, 3, 4, 6, 0, 2, 1, we can see that 1 is the last of the current pages to occur. So 0 can be placed in a frame by removing 1.
Let us understand this OPTIMAL Page Replacement Algorithm working with the help
of an example.
Example:
Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0 for a
memory with three frames and calculate number of page faults by using OPTIMAL
Page replacement algorithms.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
Reference String:
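The step-by-step table for this example is an image in the original. As a sketch (not the author's worked table), the optimal policy can be simulated directly, evicting the page whose next use lies farthest in the future:

```python
def optimal_faults(reference, n_frames):
    """Count page faults for OPTIMAL replacement with n_frames frames."""
    frames, faults = [], 0
    for i, page in enumerate(reference):
        if page in frames:
            continue                      # page hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            future = reference[i + 1:]
            # Victim: the resident page used farthest in the future
            # (a page never used again counts as farthest of all).
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future))
            frames[frames.index(victim)] = page
    return faults

ref = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
print(optimal_faults(ref, 3), "faults")
```

For this reference string with three frames, the simulation gives 11 faults and 9 hits, fewer faults than FIFO, as expected for the optimal policy.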
Main Memory
Fence Register
[Figure: main memory divided into an operating system region and a user program region, separated by a fence register]
In this approach, the operating system keeps track of the first and last locations available for the allocation of the user program.
The operating system is loaded either at the bottom or at the top of memory.
Interrupt vectors are often located in low memory; therefore, it makes sense to load the operating system in low memory.
Sharing of data and code does not make much sense in a single-process environment.
The operating system can be protected from user programs with the help of a fence register.
Advantages of Memory Management
It is a simple management approach
Disadvantages of Memory Management
It does not support multiprogramming
Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
A memory partition scheme with a fixed number of partitions was introduced to support multiprogramming. This scheme is based on contiguous allocation.
Each partition is a block of contiguous memory.
Memory is partitioned into a fixed number of partitions.
Each partition is of fixed size.
Example: As shown in the figure, memory is partitioned into 5 regions. One region is reserved for the operating system; the remaining four partitions are for user programs.
Fixed Size Partitioning
[Figure: memory divided into a partition for the operating system followed by fixed partitions p1, p2, p3, p4]
Partition Table
Once partitions are defined, the operating system keeps track of the status of memory partitions; this is done through a data structure called a partition table.
Sample Partition Table
Starting Address    Size    Status
0k                  200k    allocated
450k                250k    allocated
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is determined by the number of partitions.
Multiple partition allocation: In this method, a process is selected from the
input queue and loaded into the free partition. When the process
terminates, the partition becomes available for other processes.
Fixed partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory. This available memory is known as a "Hole". When a process arrives and needs memory, we search for a hole that is large enough to store this process. If the requirement is fulfilled, we allocate memory to the process, keeping the rest available to satisfy future requests. While allocating memory, dynamic storage allocation problems sometimes occur, which concern how to satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First Fit
In First Fit, the first available free hole that fulfils the requirement of the process is allocated.
First Fit
Here, in this diagram, a 40 KB memory block is the first available free hole
that can store process A (size of 25 KB), because the first two blocks did not
have sufficient memory space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process's requirements. For this, we search the entire list, unless the list is ordered by size.
Best Fit
Here in this example, first, we traverse the complete list and find the last hole
25KB is the best suitable hole for Process A(size 25KB). In this method,
memory utilization is maximum as compared to other memory allocation
techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.
Worst Fit
Here in this example, Process A (Size 25 KB) is allocated to the largest
available memory block which is 60KB. Inefficient memory utilization is a
major issue in the worst fit.
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory after execution, creating small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfill the memory requirement of a process. To achieve a higher degree of multiprogramming, we must reduce this waste of memory, i.e., the fragmentation problem. In operating systems there are two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a process is larger than its requested size. Due to this, some unused space is left over, creating the internal fragmentation problem. Example: Suppose fixed partitioning is used for memory allocation, with blocks of sizes 3MB, 6MB, and 7MB in memory. Now a new process p4 of size 2MB comes and demands a block of memory. It gets a memory block of 3MB, but 1MB of that block is wasted, and it cannot be allocated to other processes either. This is called internal fragmentation.
2. External fragmentation: In external fragmentation, we have free memory blocks, but we cannot assign them to a process because the blocks are not contiguous. Example: Suppose (continuing the above example) three processes p1, p2, and p3 come with sizes 2MB, 4MB, and 7MB respectively. They get memory blocks of size 3MB, 6MB, and 7MB allocated respectively. After allocation, the p1 and p2 processes leave 1MB and 2MB unused. Suppose a new process p4 comes and demands a 3MB block of memory, which is available in total, but we cannot assign it because the free memory space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit systems for memory allocation are affected by
external fragmentation. To overcome the external fragmentation problem
Compaction is used. In the compaction technique, all free memory space
combines and makes one large block. So, this space can be used by other
processes effectively.
Another possible solution to the external fragmentation is to allow the logical
address space of the processes to be noncontiguous, thus permitting a
process to be allocated physical memory wherever the latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a
contiguous allocation of physical memory. This scheme permits the physical
address space of a process to be non-contiguous.
Logical Address or Virtual Address (represented in bits): An address
generated by the CPU.
Logical Address Space or Virtual Address Space (represented in
words or bytes): The set of all logical addresses generated by a program.
Physical Address (represented in bits): An address actually available on
a memory unit.
Physical Address Space (represented in words or bytes): The set of all
physical addresses corresponding to the logical addresses.
Example:
If Logical Address = 31 bits, then Logical Address Space = 2 ^ 31 words = 2 G words (1 G = 2 ^ 30)
If Logical Address Space = 128 M words = 2 ^ 7 * 2 ^ 20 words, then Logical Address = log2 (2 ^ 27) = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2 ^ 22 words = 4 M words (1 M = 2 ^ 20)
If Physical Address Space = 16 M words = 2 ^ 4 * 2 ^ 20 words, then Physical Address = log2 (2 ^ 24) = 24 bits
The mapping from virtual to physical address is done by the memory
management unit (MMU) which is a hardware device and this mapping is
known as the paging technique.
The Physical Address Space is conceptually divided into several fixed-size
blocks, called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Page Size = Frame Size
Let us consider an example:
Physical Address = 12 bits, then Physical Address Space = 4 K words
Logical Address = 13 bits, then Logical Address Space = 8 K words
Page size = frame size = 1 K words (assumption)
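With these numbers, the address bits split cleanly into number and offset fields; a quick check of the example's arithmetic:

```python
import math

logical_bits, physical_bits = 13, 12   # from the example above
page_size_words = 1024                 # 1 K words, page size = frame size

offset_bits = int(math.log2(page_size_words))  # bits for a word in a page
page_bits = logical_bits - offset_bits         # bits for the page number
frame_bits = physical_bits - offset_bits       # bits for the frame number
print(page_bits, frame_bits, offset_bits)  # 3 2 10 -> 8 pages, 4 frames
```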
Paging
The address generated by the CPU is divided into:
Page Number (p): the number of bits required to represent the pages in the Logical Address Space, i.e., the page number.
Page Offset (d): the number of bits required to represent a particular word in a page, i.e., the word number within a page, or page offset.
Physical Address is divided into:
Frame Number (f): the number of bits required to represent the frames in the Physical Address Space, i.e., the frame number.
Frame Offset (d): the number of bits required to represent a particular word in a frame, i.e., the word number within a frame, or frame offset.
The hardware implementation of the page table can be done by using dedicated registers. But the use of registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
The TLB is an associative, high-speed memory.
Each entry in TLB consists of two parts: a tag and a value.
When this memory is used, an item is compared with all tags simultaneously. If the item is found, then the corresponding value is returned.
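The tag/value lookup can be modeled with a dictionary; this is only a behavioral sketch (a real TLB compares all tags in parallel in hardware), and the page table contents are invented.

```python
tlb = {3: 3, 5: 1}                          # tag (page number) -> value (frame)
page_table = {p: p % 4 for p in range(8)}   # hypothetical full page table

def lookup_frame(page):
    if page in tlb:                   # all tags compared "simultaneously"
        return tlb[page], "TLB hit"
    frame = page_table[page]          # TLB miss: walk the page table instead
    tlb[page] = frame                 # cache the translation for next time
    return frame, "TLB miss"

print(lookup_frame(3))  # (3, 'TLB hit')
print(lookup_frame(6))  # (2, 'TLB miss')
print(lookup_frame(6))  # (2, 'TLB hit') -- cached by the earlier miss
```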