OS Unit 3
Memory Management
Main Memory Management Strategies
• User programs typically refer to memory addresses with symbolic names. These symbolic
names must be mapped or bound to physical memory addresses.
• Address binding of instructions to memory-addresses can happen at 3 different stages.
1. Compile Time - If it is known at compile time where a program will reside in physical
memory, then absolute code can be generated by the compiler, containing actual physical
addresses. However, if the load address changes at some later time, then the program will
have to be recompiled.
2. Load Time - If the location at which a program will be loaded is not known at compile
time, then the compiler must generate relocatable code, which references addresses
relative to the start of the program. If that starting address changes, then the program must
be reloaded but not recompiled.
3. Execution Time - If a program can be moved around in memory during the course of its
execution, then binding must be delayed until execution time.
Figure: Multistep processing of a user program
• The address generated by the CPU is a logical address, whereas the memory address
where programs are actually stored is a physical address.
• The set of all logical addresses used by a program composes the logical address space,
and the set of all corresponding physical addresses composes the physical address space.
• The run time mapping of logical to physical addresses is handled by the memory-
management unit (MMU).
• One of the simplest address-mapping schemes is a modification of the base-register scheme.
• The base register is termed a relocation register
• The value in the relocation-register is added to every address generated by a
user-process at the time it is sent to memory.
• The user-program deals with logical-addresses; it never sees the real physical-
addresses.
Advantages of dynamic loading:
1. An unused routine is never loaded.
2. Useful when large amounts of code are needed to handle infrequently occurring cases.
3. Although the total program-size may be large, the portion that is used (and hence loaded)
may be much smaller.
4. Does not require special support from the OS.
• With static linking, library modules are fully included in executable modules, wasting
both disk space and main memory, because every program that includes a certain
routine from the library has its own copy of that routine linked into its
executable code.
• With dynamic linking, however, only a stub is linked into the executable module,
containing references to the actual library module linked in at run time.
• The stub is a small piece of code used to locate the appropriate memory-resident
library-routine.
• This method saves disk space, because the library routines do not need to be fully
included in the executable modules, only the stubs.
• An added benefit of dynamically linked libraries (DLLs, also known as shared
libraries or shared objects on UNIX systems) involves easy upgrades and updates.
Shared libraries
• A library may be replaced by a new version, and all programs that reference the library
will automatically use the new one.
• Version information is included in both the program and the library so that programs won't
accidentally execute incompatible versions.
Swapping
Major part of swap-time is transfer-time; i.e. total transfer-time is directly proportional to the
amount of memory swapped.
Disadvantages:
1. Context-switch time is fairly high.
2. If we want to swap a process, we must be sure that it is completely idle.
Two solutions:
i) Never swap a process with pending I/O.
ii) Execute I/O operations only into OS buffers.
Memory Scope
• Static allocation: Static variables have a global scope, which means they can be accessed
from any part of the program. This can be advantageous in situations where multiple
functions need to share the same data. However, it can also lead to potential data
integrity issues if not handled carefully.
• Dynamic allocation: Dynamic memory, on the other hand, can be locally scoped within a
function or shared globally across functions as needed, providing more flexibility in
controlling the scope of the data.
Memory Management
• Static allocation: Its allocation does not require explicit memory management, as the
memory is allocated and deallocated automatically by the compiler.
• Dynamic allocation: Its allocation requires manual memory management, including
allocating, resizing, and deallocating memory using functions such as malloc(), calloc(),
realloc(), and free().
Conclusion
In summary, static memory allocation and dynamic memory allocation are two memory
management techniques that serve different purposes. Static memory allocation is used when the
size of the data structure is fixed, and memory usage needs to be optimized. Dynamic memory
allocation is used when the size of the data structure is not known in advance, and when
flexibility and efficiency are important.
Both static and dynamic memory allocation have their advantages and disadvantages, and the
choice between them depends on the specific needs of the program. As a programmer, it is
important to understand the differences between these memory allocation techniques and choose
the appropriate one based on the requirements of the program. Proper memory management is
crucial to ensure that the program runs efficiently and without errors.
Introduction
In operating systems, memory allocation refers to the process of assigning memory to different
processes or programs running on a computer system. There are two types of memory allocation
techniques that operating systems use: contiguous and non-contiguous memory allocation. In
contiguous memory allocation, memory is assigned to a process in a contiguous block. In non-
contiguous memory allocation, memory is assigned to a process in non-adjacent blocks.
Contiguous Memory Allocation
Contiguous memory allocation is a technique where the operating system allocates a contiguous
block of memory to a process. This memory is allocated in a single, continuous chunk, making it
easy for the operating system to manage and for the process to access the memory. Contiguous
memory allocation is suitable for systems with limited memory sizes and where fast access to
memory is important.
Contiguous memory allocation can be done in two ways:
• Fixed Partitioning − In fixed partitioning, the memory is divided into fixed-size
partitions, and each partition is assigned to a process. This technique is easy to implement
but can result in wasted memory if a process does not fit perfectly into a partition.
• Dynamic Partitioning − In dynamic partitioning, the memory is divided into variable-size
partitions, and each partition is assigned to a process. This technique is more efficient as it
allows the allocation of only the required memory to the process, but it requires more
overhead to keep track of the available memory.
Advantages of Contiguous Memory Allocation
• Simplicity − Contiguous memory allocation is a relatively simple and straightforward
technique for memory management. It requires less overhead and is easy to implement.
• Efficiency − Contiguous memory allocation is an efficient technique for memory
management. Once a process is allocated contiguous memory, it can access the entire
memory block without any interruption.
• Low internal fragmentation − Since a process is allocated a single contiguous block sized
to its request, little memory is wasted inside the block. This can result in better memory
utilization.
Disadvantages of Contiguous Memory Allocation
• Limited flexibility − Contiguous memory allocation is not very flexible as it requires
memory to be allocated in a contiguous block. This can limit the amount of memory that
can be allocated to a process.
• Memory wastage − If a process requires a memory size that is smaller than the contiguous
block allocated to it, there may be unused memory, resulting in memory wastage.
• Difficulty in managing larger memory sizes − As the size of memory increases,
managing contiguous memory allocation becomes more difficult. This is because finding a
contiguous block of memory that is large enough to allocate to a process becomes
challenging.
• External Fragmentation − Over time, external fragmentation may occur as a result of
memory allocation and deallocation, which may result in non-contiguous blocks of free
memory scattered throughout the system.
Overall, contiguous memory allocation is a useful technique for memory management in certain
circumstances, but it may not be the best solution in all situations, particularly when working with
larger amounts of memory or if flexibility is a priority.
Non-contiguous Memory Allocation
Non-contiguous memory allocation, on the other hand, is a technique where the operating system
allocates memory to a process in non-contiguous blocks. The blocks of memory allocated to the
process need not be contiguous, and the operating system keeps track of the various blocks
allocated to the process. Non-contiguous memory allocation is suitable for larger memory sizes
and where efficient use of memory is important.
Non-contiguous memory allocation can be done in two ways:
• Paging − In paging, a process's logical memory is divided into fixed-size pages that are
mapped onto equally sized frames of physical memory. This technique is efficient because
only the required number of pages is allocated to the process.
• Segmentation − In segmentation, the memory is divided into variable-sized segments, and
each segment is assigned to a process. This technique is more flexible than paging but
requires more overhead to keep track of the allocated segments.
Non-contiguous memory allocation is a memory management technique that divides memory into
non-contiguous blocks, allowing processes to be allocated memory that is not necessarily
contiguous. Here are some of the advantages and disadvantages of non-contiguous memory
allocation −
Advantages of Non-Contiguous Memory Allocation
• Reduced External Fragmentation − One of the main advantages of non-contiguous
memory allocation is that it can reduce external fragmentation, as memory can be allocated
in small, non-contiguous blocks.
• Increased Memory Utilization − Non-contiguous memory allocation allows for more
efficient use of memory, as small gaps in memory can be filled with processes that need
less memory.
• Flexibility − This technique allows for more flexibility in allocating and deallocating
memory, as processes can be allocated memory that is not necessarily contiguous.
• Memory Sharing − Non-contiguous memory allocation makes it easier to share memory
between multiple processes, as memory can be allocated in non-contiguous blocks that can
be shared between multiple processes.
Disadvantages of Non-Contiguous Memory Allocation
• Internal Fragmentation − One of the main disadvantages of non-contiguous memory
allocation is that it can lead to internal fragmentation, as memory can be allocated in small,
non-contiguous blocks that are not fully utilized.
• Increased Overhead − This technique requires more overhead than contiguous memory
allocation, as the operating system needs to maintain data structures to track memory
allocation.
• Slower Access − Access to memory can be slower than contiguous memory allocation, as
memory can be allocated in non-contiguous blocks that may require additional steps to
access.
In summary, non-contiguous memory allocation has advantages such as reduced external
fragmentation, increased memory utilization, flexibility, and memory sharing. However, it also
has disadvantages such as internal fragmentation, increased overhead, and slower access to
memory. Operating systems must carefully consider the tradeoffs between these advantages and
disadvantages when selecting memory management techniques.
Difference between contiguous and non-contiguous memory allocation in an operating system
• Suitable for − Contiguous: systems with limited amounts of memory where fast access to
memory is important. Non-contiguous: larger memory sizes and systems that require more
efficient use of memory.
• Advantages − Contiguous: a simple and efficient technique for memory management.
Non-contiguous: a more flexible and efficient technique for larger memory sizes and
systems that require more efficient use of memory.
Conclusion
In conclusion, memory allocation is an important aspect of operating systems, and contiguous and
non-contiguous memory allocation are two techniques used to manage memory. Contiguous
memory allocation is a simple and efficient technique for allocating memory to processes, but it
can result in memory wastage and fragmentation. It is suitable for systems with limited amounts
of memory and where fast access to memory is important. Non-contiguous memory allocation, on
the other hand, is a more flexible and efficient technique for larger memory sizes and systems that
require more efficient use of memory. However, it requires additional overhead and can be more
complicated to manage, particularly in the presence of fragmentation within memory blocks. The
choice between these two techniques depends on the specific requirements of the system in
question, and effective memory management is essential for optimal system performance.
• The main memory must accommodate both the operating system and the various user
processes. Therefore, we need to allocate the parts of the main memory in the most
efficient way possible.
• Memory is usually divided into 2 partitions: One for the resident OS. One for the user
processes.
• Each process is contained in a single contiguous section of memory.
1. Memory Mapping and Protection
2. Memory Allocation
1. Fixed-sized Partitioning
2. Variable-sized Partitioning
• The OS keeps a table indicating which parts of memory are available and which parts are
occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of
various sizes.
• Initially, all memory is available for user-processes and considered one large hole.
• When a process arrives, it is allocated memory from a hole large enough to hold it.
• If we find such a hole, we allocate only as much memory as is needed and keep the
remaining memory available to satisfy future requests.
Three strategies used to select a free hole from the set of available holes:
1. First Fit: Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search ended.
2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list,
unless the list is ordered by size. This strategy produces the smallest leftover hole.
3. Worst Fit: Allocate the largest hole. Again, we must search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole.
First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
3. Fragmentation
1. Internal Fragmentation
• The general approach is to break the physical-memory into fixed-sized blocks and
allocate memory in units based on block size.
• The allocated-memory to a process may be slightly larger than the requested-memory.
• The difference between requested-memory and allocated-memory is called internal
fragmentation i.e. Unused memory that is internal to a partition.
2. External Fragmentation
• External fragmentation occurs when there is enough total memory-space to satisfy a
request but the available-spaces are not contiguous. (i.e. storage is fragmented into a large
number of small holes).
• Both the first-fit and best-fit strategies for memory-allocation suffer from external
fragmentation.
• Statistical analysis of first-fit reveals that, given N allocated blocks, another 0.5 N blocks
will be lost to fragmentation; that is, one-third of memory may be unusable. This property
is known as the 50-percent rule.
• The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size called
pages.
• When a process is to be executed, its pages are loaded into any available memory frames
from the backing store.
• The backing store is divided into fixed-sized blocks that are of the same size as the memory
frames.
• The page size (like the frame size) is defined by the hardware.
• The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per
page, depending on the computer architecture.
• The selection of a power of 2 as a page size makes the translation of a logical address into
a page number and page offset particularly easy.
• If the size of the logical address space is 2^m and the page size is 2^n addressing units
(bytes or words), then the high-order m – n bits of a logical address designate the page
number, and the n low-order bits designate the page offset.
• When a process requests memory (e.g. when its code is loaded in from disk), free frames
are allocated from a free-frame list, and inserted into that process's page table.
• Processes are prevented from accessing one another's memory because all of their memory
requests are mapped through their own page table. There is no way for a process to
generate an address that maps into another process's memory space.
• The operating system must keep track of each individual process's page table, updating it
whenever the process's pages get moved in and out of memory, and applying the correct
page table when processing system calls for a particular process. This all increases the
overhead involved when swapping processes in and out of the CPU.
Figure: Free frames (a) before allocation and (b) after allocation.
Hardware Support
• The standard hardware solution is a special, small, fast-lookup cache called the translation look-aside buffer (TLB).
• Each entry in the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item is compared with all
keys simultaneously. If the item is found, the corresponding value field is returned. The
search is fast; the hardware, however, is expensive. Typically, the number of entries in a
TLB is small, often numbering between 64 and 1,024.
• The TLB contains only a few of the page-table entries.
Working:
• When a logical-address is generated by the CPU, its page-number is presented to the
TLB.
• If the page-number is found (TLB hit), its frame-number is immediately available and
used to access memory
• If page-number is not in TLB (TLB miss), a memory-reference to page table must be
made. The obtained frame-number can be used to access memory (Figure 1)
• In addition, we add the page-number and frame-number to the TLB, so that they will be
found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• Percentage of times that a particular page-number is found in the TLB is called hit ratio.
Shared Pages
Disadvantage:
Systems that use inverted page-tables have difficulty implementing shared-memory.
Segmentation
• In paging, the program is divided into fixed-size pages; in segmentation, the program is
divided into variable-size sections (segments).
• Page size is determined by the hardware; segment size is specified by the user.
• Paging is faster in comparison to segmentation; segmentation is slow.
• In paging, the logical address is split into a page number and a page offset; in
segmentation, it is split into a segment number and a segment offset.
• In paging, the processor uses the page number and offset to calculate the absolute address;
in segmentation, the processor uses the segment number and offset to calculate the full
address.
• Protection is hard to apply in paging; it is easy to apply in segmentation.
• The size of a page must always be equal to the size of a frame; there is no constraint on
the size of segments.