OS Unit 3

Memory management: static and dynamic memory allocation, contiguous memory allocation, non-contiguous memory allocation, paging and segmentation, virtual memory management, demand paging, paging hardware, VM handler, page replacement policies (LRU, FIFO).


UNIT 3

Memory Management
Main Memory Management Strategies

• Every program to be executed must be brought into memory. Each instruction must be
fetched from memory before it is executed.
• In a multi-tasking OS, memory management is complex, because as processes are swapped
in and out of the CPU, their code and data must be swapped in and out of memory.
Address Binding

• User programs typically refer to memory addresses with symbolic names. These symbolic
names must be mapped or bound to physical memory addresses.
• Address binding of instructions to memory-addresses can happen at 3 different stages.

1. Compile Time - If it is known at compile time where a program will reside in physical
memory, then absolute code can be generated by the compiler, containing actual physical
addresses. However, if the load address changes at some later time, then the program will
have to be recompiled.
2. Load Time - If the location at which a program will be loaded is not known at compile
time, then the compiler must generate relocatable code, which references addresses
relative to the start of the program. If that starting address changes, then the program must
be reloaded but not recompiled.
3. Execution Time - If a program can be moved around in memory during the course of its
execution, then binding must be delayed until execution time.
Figure: Multistep processing of a user program

Logical Versus Physical Address Space

• The address generated by the CPU is a logical address, whereas the memory address
where programs are actually stored is a physical address.
• The set of all logical addresses used by a program composes the logical address space,
and the set of all corresponding physical addresses composes the physical address space.
• The run time mapping of logical to physical addresses is handled by the memory-
management unit (MMU).
• One of the simplest mapping schemes is a modification of the base-register scheme.
• The base register is termed a relocation register
• The value in the relocation-register is added to every address generated by a
user-process at the time it is sent to memory.
• The user-program deals with logical-addresses; it never sees the real physical-
addresses.

Figure: Dynamic relocation using a relocation-register
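The dynamic-relocation scheme above can be sketched in a few lines of Python; this is a toy model, and the register value 14000 is just an illustrative assumption:

```python
RELOCATION_REGISTER = 14000  # base physical address (illustrative value)

def to_physical(logical_addr):
    """MMU behavior: add the relocation register to every CPU-generated address."""
    return RELOCATION_REGISTER + logical_addr

# The user program only ever sees logical addresses such as 0 or 346;
# memory receives 14000 and 14346 respectively.
```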


Dynamic Loading

• This can be used to obtain better memory-space utilization.


• A routine is not loaded until it is called.

This works as follows:


1. Initially, all routines are kept on disk in a relocatable-load format.
2. First, the main program is loaded into memory and executed.
3. When the main program calls a routine, it first checks whether that routine has been
loaded.
4. If the routine has not yet been loaded, the loader is called to load the desired routine into
memory.
5. Finally, control is passed to the newly loaded routine.
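The steps above can be sketched as a toy loader in Python; the routine table and loader function are hypothetical stand-ins for the disk-resident relocatable images:

```python
loaded = {}  # routines currently in memory

def load_from_disk(name):
    # Stand-in for the relocatable loader; returns a callable "routine".
    return lambda: f"{name} executed"

def call_routine(name):
    if name not in loaded:                  # step 3: check whether it is loaded
        loaded[name] = load_from_disk(name) # step 4: load on demand
    return loaded[name]()                   # step 5: pass control to the routine

call_routine("error_handler")  # first call triggers a load
call_routine("error_handler")  # later calls find the routine already in memory
```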

Advantages:
1. An unused routine is never loaded.
2. Useful when large amounts of code are needed to handle infrequently occurring cases.
3. Although the total program-size may be large, the portion that is used (and hence loaded)
may be much smaller.
4. Does not require special support from the OS.

Dynamic Linking and Shared Libraries

• With static linking, library modules are fully included in executable modules, wasting
both disk space and main memory, because every program that includes a certain
routine from the library has its own copy of that routine linked into its
executable code.
• With dynamic linking, however, only a stub is linked into the executable module,
containing references to the actual library module linked in at run time.
• The stub is a small piece of code used to locate the appropriate memory-resident
library-routine.
• This method saves disk space, because the library routines do not need to be fully
included in the executable modules, only the stubs.
• An added benefit of dynamically linked libraries (DLLs, also known as shared
libraries or shared objects on UNIX systems) involves easy upgrades and updates.
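The stub's behavior can be sketched in Python: on the first call it locates the real routine and caches the result, so later calls dispatch directly to the library code. The names here are illustrative, not a real linker API:

```python
library = {"sqrt": lambda x: x ** 0.5}  # stand-in for a memory-resident shared library

resolved = {}  # addresses the stub has already looked up

def stub(name, *args):
    # First call: locate the library routine and remember its "address".
    if name not in resolved:
        resolved[name] = library[name]
    # Subsequent calls go straight to the resolved routine.
    return resolved[name](*args)

stub("sqrt", 9.0)  # resolved on first use
```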

Shared libraries
• A library may be replaced by a new version, and all programs that reference the library
will automatically use the new one.
• Version information is included in both the program and the library so that programs won't
accidentally execute incompatible versions.
Swapping

• A process must be loaded into memory in order to execute.


• If there is not enough memory available to keep all running processes in memory at the
same time, then some processes that are not currently using the CPU may have their
memory swapped out to a fast local disk called the backing store.
• Swapping is the process of moving a process from memory to backing store and moving
another process from backing store to memory. Swapping is a very slow process compared
to other operations.
• A variant of swapping policy is used for priority-based scheduling algorithms. If a higher-
priority process arrives and wants service, the memory manager can swap out the lower-
priority process and then load and execute the higher-priority process. When the higher-
priority process finishes, the lower-priority process can be swapped back in and continued.
This variant of swapping is called roll out, roll in.

Swapping depends upon address-binding:


• If binding is done at load-time, then process cannot be easily moved to a different
location.
• If binding is done at execution-time, then a process can be swapped into a different
memory-space, because the physical-addresses are computed during execution-time.

Major part of swap-time is transfer-time; i.e. total transfer-time is directly proportional to the
amount of memory swapped.

Disadvantages:
1. Context-switch time is fairly high.
2. If we want to swap a process, we must be sure that it is completely idle.
Two solutions:
i) Never swap a process with pending I/O.
ii) Execute I/O operations only into OS buffers.

Figure: Swapping of two processes using a disk as a backing store


Example:
Assume that the user process is 10 MB in size and the backing store is a standard hard disk with
a transfer rate of 40 MB per second.
The actual transfer of the 10-MB process to or from main memory takes
10 MB / 40 MB per second = 1/4 second = 250 milliseconds.
Assuming that no head seeks are necessary, and assuming an average latency of 8 milliseconds,
the swap time is 258 milliseconds. Since we must both swap out and swap in, the total swap time
is about 516 milliseconds.
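The arithmetic in this example is easy to check:

```python
process_mb = 10
transfer_rate_mb_per_s = 40
latency_ms = 8

transfer_ms = process_mb / transfer_rate_mb_per_s * 1000  # 250 ms transfer
swap_ms = transfer_ms + latency_ms                        # 258 ms one way
total_ms = 2 * swap_ms                                    # 516 ms: swap out + swap in
```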

Difference between Static and Dynamic Memory Allocation


Memory allocation is an important aspect of computer programming, especially when it comes to
creating and managing data structures. When writing code, memory is used to store variables and
data, which can either be allocated statically or dynamically. In this article, we will explore the
difference between static and dynamic memory allocation, the advantages, and disadvantages of
each, and when to use them.

Static Memory Allocation:


Static memory allocation is a memory management technique that involves reserving a fixed
amount of memory for a variable at the time of program compilation. The memory is allocated at
compile time, and the memory remains fixed throughout the life of the program. Static memory
allocation is commonly used for global variables, static variables, and arrays.
Static variables are declared outside the main function and are available throughout the program.
These variables are allocated memory at the time of program compilation. Global variables are
like static variables but are accessible from all the functions in the program. Arrays are also
allocated memory at the time of program compilation, and their size is fixed.
Advantages of Static Memory Allocation:
1. Faster Access: Since the memory is allocated at compile time, accessing static memory is
faster compared to dynamic memory. This is because the memory address is known at the
time of compilation.
2. No Overhead: Static memory allocation does not require any runtime overhead for
memory allocation and deallocation. This makes it more efficient than dynamic memory
allocation.
3. Persistent Data: Static variables and arrays retain their data throughout the life of the
program. This is useful when data needs to be shared between different functions.
Disadvantages of Static Memory Allocation
1. Limited Flexibility: Static memory allocation is inflexible because the size of the memory
is fixed at compile time. This means that if the size of the data structure needs to be
changed, the entire program needs to be recompiled.
2. Wastage of Memory: If the size of the data structure is not known in advance, static
memory allocation can result in the wastage of memory.
3. Limited Scope: Static variables are only accessible within the function where they are
defined, or globally if they are defined outside of any function.
Dynamic Memory Allocation:
Dynamic memory allocation is a memory management technique that involves reserving memory
for variables at runtime. This means that memory is allocated and deallocated as required during
the program execution. Dynamic memory allocation is commonly used for creating data
structures such as linked lists, trees, and dynamic arrays.
The dynamic memory allocation process involves using functions such as malloc(), calloc(),
realloc(), and free(). The malloc() function allocates a block of memory of a given size in bytes
and returns a pointer to the allocated memory. The calloc() function allocates memory and
initializes it to zero. The realloc() function changes the size of an already allocated memory
block. The free() function deallocates memory previously allocated by malloc(), calloc(), or
realloc().
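The life cycle of these calls can be mimicked with a toy Python "heap"; bytearrays stand in for raw blocks, so this is an analogy to the C functions, not the real runtime:

```python
heap = {}     # block id -> bytearray standing in for a raw memory block
next_id = 0

def my_malloc(n):
    global next_id
    next_id += 1
    heap[next_id] = bytearray(n)   # note: real malloc leaves contents uninitialized
    return next_id                 # stand-in for the returned pointer

def my_calloc(count, size):
    return my_malloc(count * size) # bytearray() zero-fills, matching calloc

def my_realloc(ptr, n):
    block = heap[ptr]
    heap[ptr] = (block + bytearray(n))[:n]  # grow or shrink, keeping old data
    return ptr

def my_free(ptr):
    del heap[ptr]                  # forgetting this call is a "memory leak"
```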
Advantages of Dynamic Memory Allocation:
1. Flexible Memory Usage: Dynamic memory allocation allows the size of the data structure
to be changed dynamically during program execution. This makes it more flexible than
static memory allocation.
2. Efficient Memory Usage: Dynamic memory allocation allows memory to be allocated
only when it is needed, which makes it more efficient than static memory allocation. This
results in less wastage of memory.
3. Global Access: Dynamic memory can be accessed globally, which means that it can be
shared between different functions.
Disadvantages of Dynamic Memory Allocation:
1. Slower Access: Accessing dynamic memory is slower compared to static memory because
the memory address is not known at compile time. The memory address must be looked up
during program execution.
2. Memory Leaks: Dynamic memory allocation can result in memory leaks if memory is not
deallocated properly. This can cause the program to crash or slow down.
3. Fragmentation: Dynamic memory allocation can result in memory fragmentation if the
memory is not allocated and deallocated properly. Memory fragmentation occurs when
there are small unused gaps between allocated memory blocks. These gaps can prevent
larger memory blocks from being allocated, even if there is enough total memory
available.
When to use Static Memory Allocation:
Static memory allocation is best suited for situations where the size of the data structure is fixed
and known in advance. It is also useful for global variables and variables that need to be accessed
frequently, such as counters or flags. Static memory allocation should be used when memory
usage needs to be optimized, and when there is a need for persistent data that should be available
throughout the life of the program.
When to use Dynamic Memory Allocation:
Dynamic memory allocation is best suited for situations where the size of the data structure is not
known in advance and needs to be changed during program execution. It is also useful for
situations where memory needs to be allocated and deallocated frequently. Dynamic memory
allocation should be used when flexibility and efficiency are important, and when memory usage
needs to be optimized.
Key differences between Static and Dynamic Memory:
Memory Usage: Static memory allocation reserves memory at compile time, which means that
the memory is allocated for the entire duration of the program, regardless of whether the
variable is used or not. This can result in a waste of memory if the variable is not utilized
throughout the program's execution. Dynamic memory allocation, on the other hand, allocates
memory at runtime, which means that memory is only allocated when it is needed. This can
result in more efficient memory usage, as memory is only reserved when required and
deallocated when no longer needed.

Memory Flexibility: Static memory allocation has a fixed size, which is determined at compile
time. This means that the size of the data structure cannot be changed during program
execution, and any change to the data structure requires recompiling the entire program. In
contrast, dynamic memory allocation provides flexibility to resize the data structure during
runtime using functions such as realloc(). This allows for more dynamic and adaptable data
structures, such as linked lists and dynamic arrays, that can grow or shrink as needed.

Memory Deallocation: Static memory is deallocated automatically when the program
terminates, as the memory is reserved for the entire duration of the program. Dynamic memory
allocation requires explicit deallocation using the free() function to release the memory back to
the system when it is no longer needed.

Memory Access: Accessing static memory is usually faster than accessing dynamic memory, as
the memory address is known at compile time. This allows quicker access to a variable's value
during program execution. Accessing dynamic memory, on the other hand, requires looking up
the memory address during runtime, which adds overhead and slightly slower access times
compared to static memory.

Memory Scope: Static variables have a global scope, which means that they can be accessed
from any part of the program. This can be advantageous when multiple functions need to share
the same data, but it can also lead to data integrity issues if not handled carefully. Dynamic
memory, on the other hand, can be locally scoped within a function or shared globally across
functions as needed, providing more flexibility in controlling the scope of the data.

Memory Management: Static allocation does not require explicit memory management, as the
memory is allocated and deallocated automatically by the compiler. Dynamic allocation
requires manual memory management, including allocating, resizing, and deallocating memory
using functions such as malloc(), calloc(), realloc(), and free().

Conclusion
In summary, static memory allocation and dynamic memory allocation are two memory
management techniques that serve different purposes. Static memory allocation is used when the
size of the data structure is fixed, and memory usage needs to be optimized. Dynamic memory
allocation is used when the size of the data structure is not known in advance, and when
flexibility and efficiency are important.
Both static and dynamic memory allocation have their advantages and disadvantages, and the
choice between them depends on the specific needs of the program. As a programmer, it is
important to understand the differences between these memory allocation techniques and choose
the appropriate one based on the requirements of the program. Proper memory management is
crucial to ensure that the program runs efficiently and without errors.

Contiguous and Non-Contiguous Memory Allocation in Operating Systems

Introduction
In operating systems, memory allocation refers to the process of assigning memory to different
processes or programs running on a computer system. There are two types of memory allocation
techniques that operating systems use: contiguous and non-contiguous memory allocation. In
contiguous memory allocation, memory is assigned to a process in a contiguous block. In non-
contiguous memory allocation, memory is assigned to a process in non-adjacent blocks.
Contiguous Memory Allocation
Contiguous memory allocation is a technique where the operating system allocates a contiguous
block of memory to a process. This memory is allocated in a single, continuous chunk, making it
easy for the operating system to manage and for the process to access the memory. Contiguous
memory allocation is suitable for systems with limited memory sizes and where fast access to
memory is important.
Contiguous memory allocation can be done in two ways
• Fixed Partitioning − In fixed partitioning, the memory is divided into fixed-size
partitions, and each partition is assigned to a process. This technique is easy to implement
but can result in wasted memory if a process does not fit perfectly into a partition.
• Dynamic Partitioning − In dynamic partitioning, the memory is divided into variable-size
partitions, and each partition is assigned to a process. This technique is more efficient as it
allows the allocation of only the required memory to the process, but it requires more
overhead to keep track of the available memory.
Advantages of Contiguous Memory Allocation
• Simplicity − Contiguous memory allocation is a relatively simple and straightforward
technique for memory management. It requires less overhead and is easy to implement.
• Efficiency − Contiguous memory allocation is an efficient technique for memory
management. Once a process is allocated contiguous memory, it can access the entire
memory block without any interruption.
• Low fragmentation − Since the memory is allocated in contiguous blocks, there is a lower
risk of memory fragmentation. This can result in better memory utilization, as there is less
memory wastage.
Disadvantages of Contiguous Memory Allocation
• Limited flexibility − Contiguous memory allocation is not very flexible as it requires
memory to be allocated in a contiguous block. This can limit the amount of memory that
can be allocated to a process.
• Memory wastage − If a process requires a memory size that is smaller than the contiguous
block allocated to it, there may be unused memory, resulting in memory wastage.
• Difficulty in managing larger memory sizes − As the size of memory increases,
managing contiguous memory allocation becomes more difficult. This is because finding a
contiguous block of memory that is large enough to allocate to a process becomes
challenging.
• External Fragmentation − Over time, external fragmentation may occur as a result of
memory allocation and deallocation, which may result in non-contiguous blocks of free
memory scattered throughout the system.
Overall, contiguous memory allocation is a useful technique for memory management in certain
circumstances, but it may not be the best solution in all situations, particularly when working with
larger amounts of memory or if flexibility is a priority.
Non-contiguous Memory Allocation
Non-contiguous memory allocation, on the other hand, is a technique where the operating system
allocates memory to a process in non-contiguous blocks. The blocks of memory allocated to the
process need not be contiguous, and the operating system keeps track of the various blocks
allocated to the process. Non-contiguous memory allocation is suitable for larger memory sizes
and where efficient use of memory is important.
Non-contiguous memory allocation can be done in two ways
• Paging − In paging, the memory is divided into fixed-size pages, and each page is
assigned to a process. This technique is more efficient as it allows the allocation of only
the required memory to the process.
• Segmentation − In segmentation, the memory is divided into variable-sized segments, and
each segment is assigned to a process. This technique is more flexible than paging but
requires more overhead to keep track of the allocated segments.
Non-contiguous memory allocation is a memory management technique that divides memory into
non-contiguous blocks, allowing processes to be allocated memory that is not necessarily
contiguous. Here are some of the advantages and disadvantages of non-contiguous memory
allocation −
Advantages of Non-Contiguous Memory Allocation
• Reduced External Fragmentation − One of the main advantages of non-contiguous
memory allocation is that it can reduce external fragmentation, as memory can be allocated
in small, non-contiguous blocks.
• Increased Memory Utilization − Non-contiguous memory allocation allows for more
efficient use of memory, as small gaps in memory can be filled with processes that need
less memory.
• Flexibility − This technique allows for more flexibility in allocating and deallocating
memory, as processes can be allocated memory that is not necessarily contiguous.
• Memory Sharing − Non-contiguous memory allocation makes it easier to share memory
between multiple processes, as memory can be allocated in non-contiguous blocks that can
be shared between multiple processes.
Disadvantages of Non-Contiguous Memory Allocation
• Internal Fragmentation − One of the main disadvantages of non-contiguous memory
allocation is that it can lead to internal fragmentation, as memory can be allocated in small,
non-contiguous blocks that are not fully utilized.
• Increased Overhead − This technique requires more overhead than contiguous memory
allocation, as the operating system needs to maintain data structures to track memory
allocation.
• Slower Access − Access to memory can be slower than contiguous memory allocation, as
memory can be allocated in non-contiguous blocks that may require additional steps to
access.
In summary, non-contiguous memory allocation has advantages such as reduced external
fragmentation, increased memory utilization, flexibility, and memory sharing. However, it also
has disadvantages such as internal fragmentation, increased overhead, and slower access to
memory. Operating systems must carefully consider the tradeoffs between these advantages and
disadvantages when selecting memory management techniques.
Difference between contiguous and non-contiguous memory allocation in an operating system

Method: Contiguous allocation assigns memory to a process in a single contiguous block;
non-contiguous allocation assigns memory to a process in non-contiguous blocks.

Block Size: Contiguous memory is allocated in a single, continuous chunk; non-contiguous
memory is allocated in blocks of varying sizes.

Management: Contiguous allocation is easy for the operating system to manage;
non-contiguous allocation requires additional overhead and can be more complicated to
manage.

Memory Usage: Contiguous allocation may result in memory wastage and external
fragmentation; non-contiguous allocation makes efficient use of memory and reduces
fragmentation within memory blocks.

Suitable For: Contiguous allocation suits systems with limited amounts of memory where fast
access to memory is important; non-contiguous allocation suits larger memory sizes and
systems that require more efficient use of memory.

Advantages: Contiguous allocation is a simple and efficient technique for memory
management; non-contiguous allocation is a more flexible and efficient technique for larger
memory sizes and systems that require more efficient use of memory.

Disadvantages: Contiguous allocation can be inflexible and result in memory wastage and
fragmentation; non-contiguous allocation requires additional overhead and can be more
complicated to manage.

Conclusion
In conclusion, memory allocation is an important aspect of operating systems, and contiguous and
non-contiguous memory allocation are two techniques used to manage memory. Contiguous
memory allocation is a simple and efficient technique for allocating memory to processes, but it
can result in memory wastage and fragmentation. It is suitable for systems with limited amounts
of memory and where fast access to memory is important. Non-contiguous memory allocation, on
the other hand, is a more flexible and efficient technique for larger memory sizes and systems that
require more efficient use of memory. However, it requires additional overhead and can be more
complicated to manage, particularly in the presence of fragmentation within memory blocks. The
choice between these two techniques depends on the specific requirements of the system in
question, and effective memory management is essential for optimal system performance.

1. What is memory allocation that is not contiguous?


Non-contiguous memory allocation divides the process into blocks (or pages or segments), which
are subsequently allotted to different memory locations in accordance with the quantity of free
memory.
2. Is segmentation contiguous or non-contiguous?
Segmentation is an OS mechanism for allocating non-contiguous memory. Instead of dividing the
process into fixed-size pages, segmentation divides it into modules.
3. What is the allocation of contiguous memory?
Under the contiguous memory allocation technique of memory management, when a user
process requests memory, one part of the contiguous memory block is given to that process in
accordance with its requirements.

Contiguous Memory Allocation

• The main memory must accommodate both the operating system and the various user
processes. Therefore, we need to allocate the parts of the main memory in the most
efficient way possible.
• Memory is usually divided into 2 partitions: One for the resident OS. One for the user
processes.
• Each process is contained in a single contiguous section of memory.
1. Memory Mapping and Protection

• Memory-protection means protecting OS from user-process and protecting user-


processes from one another.
• Memory-protection is done using
o Relocation-register: contains the value of the smallest physical-address.
o Limit-register: contains the range of logical-addresses.
• Each logical-address must be less than the limit-register.
• The MMU maps the logical-address dynamically by adding the value in the relocation-
register. This mapped-address is sent to memory
• When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit-registers with the correct values.
• Because every address generated by the CPU is checked against these registers, we can
protect the OS from the running-process.
• The relocation-register scheme provides an effective way to allow the OS size to change
dynamically.
• Transient OS code: Code that comes & goes as needed to save memory-space and
overhead for unnecessary swapping.

Figure: Hardware support for relocation and limit-registers
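The check-then-relocate hardware can be sketched as follows; the register values are illustrative assumptions, and the trap is modeled as a Python exception:

```python
LIMIT = 300       # limit register: size of the logical address space (assumed value)
RELOCATION = 14000  # relocation register: smallest physical address (assumed value)

def mmu(logical_addr):
    # Every CPU-generated address is first checked against the limit register.
    if logical_addr >= LIMIT:
        raise MemoryError("trap: addressing error")  # protects the OS and other processes
    # Valid addresses are relocated and sent to memory.
    return RELOCATION + logical_addr
```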

2. Memory Allocation

Two types of memory partitioning are:


1. Fixed-sized partitioning
2. Variable-sized partitioning

1. Fixed-sized Partitioning

• The memory is divided into fixed-sized partitions.


• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is selected from the input queue and loaded into the
free partition.
• When the process terminates, the partition becomes available for another process.

2. Variable-sized Partitioning

• The OS keeps a table indicating which parts of memory are available and which parts are
occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of
various sizes.
• Initially, all memory is available for user-processes and considered one large hole.
• When a process arrives, it is allocated memory from a hole large enough to hold it.
• When we find such a hole, we allocate only as much memory as is needed and keep the
remaining memory available to satisfy future requests.
Three strategies used to select a free hole from the set of available holes:

1. First Fit: Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search ended.

2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list,
unless the list is ordered by size. This strategy produces the smallest leftover hole.

3. Worst Fit: Allocate the largest hole. Again, we must search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole.

First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
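The three strategies can be sketched over a list of hole sizes; the hole list and request size below are arbitrary examples:

```python
def first_fit(holes, request):
    # Allocate the first hole that is big enough.
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Allocate the smallest hole that is big enough.
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    # Allocate the largest hole.
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
# For a 212 KB request: first fit picks index 1 (500), best fit picks
# index 3 (300, the smallest leftover), worst fit picks index 4 (600).
```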

3. Fragmentation

Two types of memory fragmentation:


1. Internal fragmentation
2. External fragmentation

1. Internal Fragmentation
• The general approach is to break the physical-memory into fixed-sized blocks and
allocate memory in units based on block size.
• The allocated-memory to a process may be slightly larger than the requested-memory.
• The difference between requested-memory and allocated-memory is called internal
fragmentation, i.e., unused memory that is internal to a partition.
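For example, with 4 KB blocks (an assumed block size), a 10,000-byte request must be rounded up to three whole blocks:

```python
BLOCK = 4096  # assumed fixed block size in bytes

def allocated(request):
    blocks = -(-request // BLOCK)  # ceiling division: whole blocks needed
    return blocks * BLOCK

request = 10000
waste = allocated(request) - request  # 12288 - 10000 = 2288 bytes of internal fragmentation
```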
2. External Fragmentation
• External fragmentation occurs when there is enough total memory-space to satisfy a
request but the available-spaces are not contiguous. (i.e. storage is fragmented into a large
number of small holes).
• Both the first-fit and best-fit strategies for memory-allocation suffer from external
fragmentation.
• Statistical analysis of first-fit reveals that given N allocated blocks, another 0.5 N blocks
will be lost to fragmentation. This property is known as the 50-percent rule.

Two solutions to external fragmentation:


• Compaction: The goal is to shuffle the memory-contents to place all free memory together
in one large hole. Compaction is possible only if relocation is dynamic and done at
execution-time.
• Permit the logical-address space of the processes to be non-contiguous. This allows a
process to be allocated physical-memory wherever such memory is available. Two
techniques achieve this solution: 1) Paging and 2) Segmentation.
Paging

• Paging is a memory-management scheme.


• This permits the physical-address space of a process to be non-contiguous.
• This also solves the considerable problem of fitting memory-chunks of varying sizes
onto the backing-store.
• Traditionally: Support for paging has been handled by hardware.
• Recent designs: The hardware & OS are closely integrated.

Basic Method of Paging

• The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size called
pages.
• When a process is to be executed, its pages are loaded into any available memory frames
from the backing store.
• The backing store is divided into fixed-sized blocks that are of the same size as the memory
frames.

The hardware support for paging is illustrated in Figure 1.


Figure 1: Paging hardware

• Address generated by CPU is divided into 2 parts (Figure 2):


1. Page-number (p) is used as an index to the page-table. The page-table contains the
base-address of each page in physical-memory.
2. Offset (d) is combined with the base-address to define the physical-address. This
physical-address is sent to the memory-unit.
• The page table maps the page number to a frame number, to yield a physical address
which also has two parts: the frame number and the offset within that frame.
• The number of bits in the frame number determines how many frames the system can
address, and the number of bits in the offset determines the size of each frame.

The paging model of memory is shown in Figure 2.

Figure 2: Paging model of logical and physical memory.

• The page size (like the frame size) is defined by the hardware.
• The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per
page, depending on the computer architecture.
• The selection of a power of 2 as a page size makes the translation of a logical address into
a page number and page offset particularly easy.
• If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or
words), then the high-order m - n bits of a logical address designate the page number, and
the n low-order bits designate the page offset.

Thus, the logical address is as follows:

    | page number (p) | page offset (d) |
    |    m - n bits   |      n bits     |
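As a concrete sketch of this split, the translation below assumes a 256-byte page size (so n = 8 offset bits); the page-table contents and addresses are illustrative values, not taken from the text:

```python
# Hypothetical paging translation: 256-byte pages, so the low 8 bits of a
# logical address are the offset (d) and the remaining bits the page number (p).
PAGE_SIZE = 256

# Illustrative page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    p = logical_address // PAGE_SIZE    # high-order m - n bits: page number
    d = logical_address % PAGE_SIZE     # low-order n bits: page offset
    frame = page_table[p]               # page table maps p to a frame number
    return frame * PAGE_SIZE + d        # physical address = frame base + offset

# Logical address 260 lies in page 1 at offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 256 + 4 = 516.
print(translate(260))  # 516
```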

• When a process requests memory (e.g. when its code is loaded in from disk), free frames
are allocated from a free-frame list, and inserted into that process's page table.
• Processes are blocked from accessing anyone else's memory because all of their memory
requests are mapped through their page table. There is no way for them to generate an
address that maps into any other process's memory space.
• The operating system must keep track of each individual process's page table, updating it
whenever the process's pages get moved in and out of memory, and applying the correct
page table when processing system calls for a particular process. This all increases the
overhead involved when swapping processes in and out of the CPU.

Figure: Free frames (a) before allocation and (b) after allocation.
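The free-frame bookkeeping described above can be sketched as follows; the list contents and the function name are assumptions for illustration, not an OS-specific API:

```python
# Minimal sketch of allocating frames from a free-frame list into a
# per-process page table when a process's pages are loaded from disk.
free_frames = [14, 13, 18, 20, 15]   # hypothetical free-frame list

def load_process(num_pages):
    if num_pages > len(free_frames):
        raise MemoryError("not enough free frames")
    page_table = {}
    for page in range(num_pages):
        page_table[page] = free_frames.pop(0)  # take the next free frame
    return page_table

pt = load_process(4)
print(pt)           # {0: 14, 1: 13, 2: 18, 3: 20}
print(free_frames)  # [15]
```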

Hardware Support

Translation Look-aside Buffer

• A special, small, fast lookup hardware cache, called a translation look-aside buffer (TLB).
• Each entry in the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item is compared with all
keys simultaneously. If the item is found, the corresponding value field is returned. The
search is fast; the hardware, however, is expensive. Typically, the number of entries in a
TLB is small, often numbering between 64 and 1,024.
• The TLB contains only a few of the page-table entries.

Working:
• When a logical-address is generated by the CPU, its page-number is presented to the
TLB.
• If the page-number is found (TLB hit), its frame-number is immediately available and
used to access memory
• If page-number is not in TLB (TLB miss), a memory-reference to page table must be
made. The obtained frame-number can be used to access memory (Figure 1)

Figure 1: Paging hardware with TLB

• In addition, we add the page-number and frame-number to the TLB, so that they will be
found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• Percentage of times that a particular page-number is found in the TLB is called hit ratio.

Advantage: Search operation is fast.


Disadvantage: Hardware is expensive.
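A common way to quantify the TLB's benefit is the effective access time (EAT), weighted by the hit ratio. A minimal sketch, assuming a 20 ns TLB search and a 100 ns memory access (illustrative timings):

```python
# Effective access time: a TLB hit needs one memory access after the TLB
# search; a TLB miss needs an extra access to read the page table first.
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    hit_cost = tlb_ns + mem_ns           # TLB hit: search + one memory access
    miss_cost = tlb_ns + 2 * mem_ns      # TLB miss: search + page table + data
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.80))  # about 140 ns at an 80% hit ratio
print(effective_access_time(0.98))  # about 122 ns at a 98% hit ratio
```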

• Some TLBs have wired-down entries that can't be removed.


• Some TLBs store an ASID (address-space identifier) in each entry of the TLB that uniquely
identifies each process and provides address-space protection for that process.
Protection

• Memory-protection is achieved by protection-bits for each frame.


• The protection-bits are kept in the page-table.
• One protection-bit can define a page to be read-write or read-only.
• Every reference to memory goes through the page-table to find the correct frame-
number.
• Firstly, the physical-address is computed. At the same time, the protection-bit is checked
to verify that no writes are being made to a read-only page.
• An attempt to write to a read-only page causes a hardware-trap to the OS (or memory
protection violation).
Valid Invalid Bit
• This bit is attached to each entry in the page-table.
• Valid bit: “valid” indicates that the associated page is in the process’ logical address
space, and is thus a legal page
• Invalid bit: “invalid” indicates that the page is not in the process’ logical address space

Illegal addresses are trapped by use of valid-invalid bit.


The OS sets this bit for each page to allow or disallow access to the page.

Figure: Valid (v) or invalid (i) bit in a page-table
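The protection-bit and valid-invalid-bit checks described above can be sketched as follows; the entry layout and field names are illustrative assumptions:

```python
# Sketch of page-table entries carrying a frame number, a read-write
# protection bit, and a valid-invalid bit (field names are illustrative).
page_table = {
    0: {"frame": 3, "writable": True,  "valid": True},
    1: {"frame": 7, "writable": False, "valid": True},   # read-only page
    2: {"frame": 0, "writable": False, "valid": False},  # not in address space
}

def access(page, write=False):
    entry = page_table.get(page)
    if entry is None or not entry["valid"]:
        raise RuntimeError("trap: invalid page reference")
    if write and not entry["writable"]:
        raise RuntimeError("trap: write to read-only page")
    return entry["frame"]

print(access(0, write=True))  # 3
print(access(1))              # 7 (reads of a read-only page are allowed)
```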

Shared Pages

• An advantage of paging is the possibility of sharing common code.


• Re-entrant code (Pure Code) is non-self-modifying code, it never changes during
execution.
• Two or more processes can execute the same code at the same time.
• Each process has its own copy of registers and data-storage to hold the data for the
process's execution.
• The data for 2 different processes will be different.
• For example, if several users run the same text editor, only one copy of the editor code
need be kept in physical-memory (Figure 5.12).
• Each user's page-table maps onto the same physical copy of the editor, but data pages are
mapped onto different frames.

Disadvantage:
Systems that use inverted page-tables have difficulty implementing shared-memory.

Figure: Sharing of code in a paging environment

Segmentation

Basic Method of Segmentation

• This is a memory-management scheme that supports user-view of memory (Figure 1).


• A logical-address space is a collection of segments.
• Each segment has a name and a length.
• The addresses specify both segment-name and offset within the segment.
• Normally, the user-program is compiled, and the compiler automatically constructs
segments reflecting the input program.
• For example: the code, global variables, the heap (from which memory is allocated), the
stacks used by each thread, and the standard C library.
Figure: Programmer’s view of a program

Hardware support for Segmentation


• Segment-table maps two-dimensional user-defined addresses into one-dimensional physical
addresses.
• In the segment-table, each entry has following 2 fields:
1. Segment-base contains starting physical-address where the segment resides in
memory.
2. Segment-limit specifies the length of the segment (Figure 2).
• A logical-address consists of 2 parts:
1. Segment-number(s) is used as an index to the segment-table
2. Offset(d) must be between 0 and the segment-limit.
• If offset is not between 0 & segment-limit, then we trap to the OS(logical-addressing
attempt beyond end of segment).
• If offset is legal, then it is added to the segment-base to produce the physical-memory
address.

Figure: Segmentation hardware
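The limit check and base addition described above can be sketched as follows; the segment bases and limits are illustrative values:

```python
# Sketch of segmentation address translation: each segment-table entry
# holds a segment-base and a segment-limit; an offset at or beyond the
# limit traps to the OS.
segment_table = [
    (1400, 1000),   # segment 0: base 1400, limit 1000
    (6300, 400),    # segment 1: base 6300, limit 400
    (4300, 1100),   # segment 2: base 4300, limit 1100
]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not (0 <= offset < limit):
        raise RuntimeError("trap: addressing attempt beyond end of segment")
    return base + offset    # legal offset is added to the segment base

# Byte 53 of segment 2 maps to 4300 + 53 = 4353.
print(translate(2, 53))  # 4353
```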


Difference Between Paging and Segmentation

1. Paging: the program is divided into fixed-size pages. Segmentation: the program is divided
into variable-size segments.
2. Paging: the operating system is responsible. Segmentation: the compiler is responsible.
3. Paging: the page size is determined by the hardware. Segmentation: the segment size is
specified by the user.
4. Paging: faster in comparison to segmentation. Segmentation: slower in comparison to paging.
5. Paging: can result in internal fragmentation. Segmentation: can result in external
fragmentation.
6. Paging: the logical address is split into a page number and a page offset. Segmentation: the
logical address is split into a segment number and a segment offset.
7. Paging: uses a page table that holds the base address of every page. Segmentation: uses a
segment table that holds the base and limit of every segment.
8. Paging: the page table maintains the page data. Segmentation: the segment table maintains
the segment data.
9. Paging: the operating system must maintain a free-frame list. Segmentation: the operating
system maintains a list of holes in main memory.
10. Paging: invisible to the user. Segmentation: visible to the user.
11. Paging: the processor uses the page number and offset to calculate the physical address.
Segmentation: the processor uses the segment number and offset to calculate the physical
address.
12. Paging: sharing of procedures between processes is hard. Segmentation: facilitates sharing
of procedures between processes.
13. Paging: a programmer cannot efficiently handle data structures. Segmentation: can
efficiently handle data structures.
14. Paging: protection is hard to apply. Segmentation: protection is easy to apply.
15. Paging: the page size must always equal the frame size. Segmentation: there is no
constraint on the size of segments.
16. Paging: a page is referred to as a physical unit of information. Segmentation: a segment is
referred to as a logical unit of information.
17. Paging: results in a less efficient system. Segmentation: results in a more efficient system.

Advantages of Segmented Paging


1. The page table size is reduced as pages are present only for data of segments, hence
reducing the memory requirements.
2. Gives a programmers view along with the advantages of paging.
3. Reduces external fragmentation in comparison with segmentation.
4. Since the entire segment need not be swapped out, swapping out to virtual memory
becomes easier.
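The two-level translation behind segmented paging (segment number selects a per-segment page table, then paging proceeds as usual) can be sketched as follows; all sizes, limits, and table contents are illustrative assumptions:

```python
# Sketch of segmented paging: the segment table maps a segment number to a
# limit and a per-segment page table, so pages exist only for the segment's
# actual data (values are illustrative).
PAGE_SIZE = 256

# segment number -> (limit in bytes, page table: page -> frame)
segment_table = {
    0: (600, {0: 9, 1: 4, 2: 11}),   # a 600-byte segment needs 3 pages
    1: (300, {0: 2, 1: 6}),
}

def translate(segment, offset):
    limit, page_table = segment_table[segment]
    if not (0 <= offset < limit):
        raise RuntimeError("trap: offset beyond segment limit")
    p, d = divmod(offset, PAGE_SIZE)      # ordinary paging within the segment
    return page_table[p] * PAGE_SIZE + d

# Offset 300 in segment 0 is page 1, offset 44 -> frame 4 -> 4*256 + 44 = 1068.
print(translate(0, 300))  # 1068
```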
Disadvantages of Segmented Paging
1. Internal fragmentation still exists in pages.
2. Extra hardware is required
3. Translation becomes more sequential increasing the memory access time.
4. External fragmentation occurs because of varying sizes of page tables and varying sizes of
segment tables in today’s systems.
Advantages of Paged Segmentation
1. No external fragmentation
2. Reduced memory requirements as no. of pages limited to segment size.
3. Page table size is smaller, just like segmented paging.
4. Similar to segmented paging, the entire segment need not be swapped out.
5. Increased flexibility in memory allocation: Paged Segmentation allows for a flexible
allocation of memory, where each segment can have a different size, and each page can
have a different size within a segment.
6. Improved protection and security: Paged Segmentation provides better protection and
security by isolating each segment and its pages, preventing a single segment from
affecting the entire process’s memory.
7. Increased program structure: Paged Segmentation provides a natural program structure,
with each segment representing a different logical part of a program.
8. Improved error detection and recovery: Paged Segmentation enables the detection of
memory errors and the recovery of individual segments, rather than the entire process’s
memory.
9. Reduced overhead in memory management: Paged Segmentation reduces the overhead in
memory management by eliminating the need to maintain a single, large page table for the
entire process’s memory.
10. Improved memory utilization: Paged Segmentation can improve memory utilization by
reducing fragmentation and allowing for the allocation of larger blocks of contiguous
memory to each segment.
Disadvantages of Paged Segmentation
1. Internal fragmentation remains a problem.
2. The hardware is more complex than for segmented paging.
3. Extra level of paging at first stage adds to the delay in memory access.
4. Increased complexity in memory management: Paged Segmentation introduces additional
complexity in the memory management process, as it requires the maintenance of multiple
page tables for each segment, rather than a single page table for the entire process’s
memory.
5. Increased overhead in memory access: Paged Segmentation introduces additional overhead
in memory access, as it requires multiple lookups in multiple page tables to access a single
memory location.
6. Reduced performance: Paged Segmentation can result in reduced performance, as the
additional overhead in memory management and access can slow down the overall
process.
7. Increased storage overhead: Paged Segmentation requires additional storage overhead, as it
requires additional data structures to store the multiple page tables for each segment.
8. Increased code size: Paged Segmentation can result in increased code size, as the
additional code required to manage the multiple page tables can take up valuable memory
space.
9. Reduced address space: Paged Segmentation can result in a reduced address space, as
some of the available memory is consumed by the additional segment and page tables.
