Memory Management
■ To accommodate multiple processes in main memory, a variety of tasks are carried out by
the OS and the hardware.
■ If only a few processes can be kept in main memory, the CPU will often be idle, because for long
stretches all of them may be waiting for I/O.
■ Therefore, memory needs to be allocated efficiently in order to pack as many processes into
memory as possible.
■ The kernel is loaded into a fixed portion of main memory, and the remaining portion is shared by the
user processes.
Some Terminology of Memory Management
■ Frame
– A fixed-length block of primary/main memory.
■ Page
– A fixed-length block of information/data that resides in secondary memory (disk, tape, etc.). A
page of data may temporarily be copied into a frame of main memory.
■ Segment
– A variable-length block of data that resides in secondary memory. A whole segment may
temporarily be copied into an available region of main memory (called segmentation), or
the segment may be divided into pages which can be separately copied into main memory
(referred to as combined segmentation and paging).
Logical vs. Physical Address Space
■ Proper memory management depends on the concept of a logical address space that is
bound to a separate physical address space.
Logical address – the address generated by the CPU (or by a program); also called a virtual address.
Physical address – the address seen by the memory unit, to which the logical address is mapped.
■ In compile-time and load-time address-binding schemes, the logical and physical addresses are
identical.
■ In execution-time binding, however, the two addresses (virtual and physical) differ.
Memory-Management Unit (MMU)
■ The MMU is the hardware device that maps virtual addresses to physical addresses.
■ In a simple MMU scheme, the value in a relocation register is added to every address generated by a user
program at the time it is sent to memory.
■ The user program works with logical addresses and never sees the real physical addresses, as
sketched below.
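As a minimal sketch of this relocation-register scheme (the register value and the logical address are assumed purely for illustration, not taken from any real MMU):

#include <stdio.h>
#include <stdint.h>

/* Relocation-register translation: every logical address generated by the
   program is relocated by a base value held in a hardware register.
   The value 14000 is purely an assumed example. */
static const uint32_t relocation_register = 14000u;

static uint32_t mmu_translate(uint32_t logical_address)
{
    return relocation_register + logical_address;   /* physical = base + logical */
}

int main(void)
{
    uint32_t logical = 346;   /* address as seen by the user program (assumed) */
    printf("logical %u -> physical %u\n",
           (unsigned)logical, (unsigned)mmu_translate(logical));
    return 0;
}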
Memory Management Techniques
Swapping
■ Swapping is a technique in which a process can be moved temporarily out of main memory to
secondary memory, making that memory available to other processes.
■ Some time later, the OS swaps the process back (when required) from secondary memory
to main memory.
■ Swapping generally hurts performance, but it helps in executing large and multiple processes
in parallel, and for that reason swapping is also referred to as a memory
compaction technique.
■ The total swap time includes the time taken to move a process from main
memory to the secondary drive and back to main memory, as well as the time the
process spends waiting to regain main memory.
■ Suppose the user process is 2048 KB in size and swapping takes place on a
standard hard disk with a data transfer rate of about 1 MB (1024 KB) per second.
■ The actual time to transfer the process to or from memory is 2048 KB / 1024 KB per second = 2
seconds = 2000 milliseconds.
■ So, swapping the process out and back in takes 4000 milliseconds in total, plus other overhead while the
process competes to regain main memory, as worked out in the small sketch below.
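A small sketch of this calculation, using the figures from the example above (a real system would also incur seek, rotational, and scheduling overhead):

#include <stdio.h>

int main(void)
{
    double process_kb = 2048.0;   /* size of the user process in KB (from the example above)   */
    double rate_kb_s  = 1024.0;   /* disk transfer rate in KB per second (from the example above) */

    double one_way_ms = process_kb / rate_kb_s * 1000.0;   /* swap out or swap in: 2000 ms */
    double total_ms   = 2.0 * one_way_ms;                  /* swap out and back in: 4000 ms */

    printf("one-way transfer: %.0f ms, total swap time: %.0f ms (plus other overhead)\n",
           one_way_ms, total_ms);
    return 0;
}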
Internal fragmentation
■ Suppose the memory block assigned to a process is larger than the process itself. Some portion of the
block is left unused, and it cannot be used by any other process.
Dynamic Partitioning: an example
■ A hole of 64K is left after
loading 3 processes: not
enough space for another
process.
■ Eventually, every process is
blocked. The OS swaps out
process 2 to bring in process
4.
Dynamic Partitioning: an example
Contd…
■ Another hole of 96K is created.
■ Again, every process is
blocked. The OS swaps out
process 1 to bring process 2
back in, and another hole of
96K is created...
■ Compaction would produce a
single hole of 256K
Dynamic Partitioning Contd…
Strengths
■ No internal fragmentation, more efficient use of main
memory.
Weaknesses
■ Inefficient use of processor due to the need for
compaction to counter external fragmentation.
Placement Algorithm
■ Used to choose which free block to allocate to a process
■ Goal: to reduce the use of compaction (a time-consuming
process)
■ Possible algorithms (a sketch of all three follows this list):
– Best-fit: chooses the block that is closest in size to the
request.
– First-fit: begins to scan memory from the beginning and chooses the first available block that is
large enough.
– Next-fit: begins to scan memory from the location of the last placement and chooses the next
available block that is large enough.
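A minimal sketch of the three placement policies (the free-hole sizes and the request size in main are assumed purely for illustration):

#include <stdio.h>
#include <stddef.h>

/* First-fit: scan from the beginning, take the first hole that is large enough. */
static int first_fit(const size_t holes[], size_t n, size_t request)
{
    for (size_t i = 0; i < n; i++)
        if (holes[i] >= request)
            return (int)i;
    return -1;   /* no hole can satisfy the request */
}

/* Best-fit: take the hole whose size is closest to (but not below) the request. */
static int best_fit(const size_t holes[], size_t n, size_t request)
{
    int best = -1;
    for (size_t i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[(size_t)best]))
            best = (int)i;
    return best;
}

/* Next-fit: like first-fit, but scanning starts where the last placement ended. */
static int next_fit(const size_t holes[], size_t n, size_t request, size_t *last)
{
    for (size_t k = 0; k < n; k++) {
        size_t i = (*last + k) % n;
        if (holes[i] >= request) {
            *last = i;
            return (int)i;
        }
    }
    return -1;
}

int main(void)
{
    size_t holes[] = { 500, 320, 128, 224 };   /* assumed hole sizes in KB */
    size_t last = 0;
    printf("first-fit(150K) -> hole %d\n", first_fit(holes, 4, 150));       /* 0 */
    printf("best-fit(150K)  -> hole %d\n", best_fit(holes, 4, 150));        /* 3 */
    printf("next-fit(150K)  -> hole %d\n", next_fit(holes, 4, 150, &last)); /* 0 */
    return 0;
}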
Paging
■ Paging is a memory management technique in which the address space of a process is broken into
blocks of the same size called pages (the page size is a power of 2, typically between 512 bytes and
8192 bytes). The size of a process is measured in the number of pages.
■ Main memory is partitioned into equal fixed-size blocks (of relatively small size).
■ Each process is likewise divided into blocks of the same size, called pages.
■ The pages are assigned to the available blocks of main memory, called frames (or page frames).
■ Consequence: a process (a collection of pages) does not need to occupy a contiguous memory
space. The size of a frame is kept the same as that of a page for optimum utilization of the main
memory and to avoid external fragmentation.
Paging Contd…
■ Now suppose that process B is
swapped out
Paging Contd…
■ When processes A and C are blocked, the pager loads a
new process D consisting of 5 pages.
■ Process D does not occupy a contiguous portion of
memory.
■ There is no external fragmentation.
■ Internal fragmentation exists only in the last page of
each process.
■ The page address is called the logical address and is represented
by a page number and an offset.
■ Logical address = (page number, page offset)
■ The frame address is called the physical address and is represented by a frame number and an offset.
■ Physical address = (frame number, page offset)
■ A data structure called the page map table is used to keep track of the mapping between the pages of a
process and the frames of physical memory.
Page Tables
■ The OS now needs to maintain (in main memory) a page table for each process.
■ Each entry of a page table contains the frame number where the corresponding page is physically
located.
■ The page table is indexed by the page number to obtain the frame number.
■ A free frame list is maintained to track the frames that are still available, as sketched below.
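A minimal sketch of these data structures (the field names and table sizes are assumed for illustration, not taken from any particular OS):

#include <stdbool.h>
#include <stdint.h>

#define PAGES_PER_PROCESS 64    /* assumed maximum number of pages in one process */
#define NUM_FRAMES        256   /* assumed number of frames in main memory        */

/* One page-table entry: the frame holding the page, plus a valid bit. */
struct page_table_entry {
    uint32_t frame_number;
    bool     present;           /* false if the page is not currently in a frame  */
};

/* Per-process page table, indexed by page number. */
struct page_table {
    struct page_table_entry entries[PAGES_PER_PROCESS];
};

/* Free-frame list kept by the OS: a simple bitmap of available frames. */
struct free_frame_list {
    bool free[NUM_FRAMES];
};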
Paging Contd…
Logical address used in paging
■ In paging, each logical address consists of a page number and an offset
within the page.
■ A CPU register always holds the starting physical address of the page table of the
currently running process.
■ With the help of logical address (page number, offset), the processor accesses the
page table to obtain the physical address (frame number, offset)
■ Because the page size is a power of 2, the logical address can be used directly as a relative address.
■ Example: if 16-bit addresses are used and the page size is 1 KB, 10 bits are needed for the
offset and the remaining 6 bits are available for the page number.
■ The 16-bit address, with the 10 least significant bits as the offset and the 6 most
significant bits as the page number, gives a location relative to the beginning of the process.
Paging Contd…
Logical address in paging
■ The pages (with a page size that is a power of 2) are invisible to the programmer and the compiler.
■ Address translation at execution time is easy to implement in hardware
– logical address (n, m) gets translated to physical address (k, m) by indexing the page table
and appending the same offset m to the frame number k.
Logical-to-Physical Address Translation in Paging
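A minimal sketch of this translation, assuming the 16-bit logical address and 1 KB page size from the example above (the page-table contents and the sample address are made up for illustration):

#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 10                          /* 1 KB page => 10-bit offset          */
#define PAGE_SIZE   (1u << OFFSET_BITS)         /* 1024 bytes                          */
#define OFFSET_MASK (PAGE_SIZE - 1u)            /* the 10 least significant bits       */
#define NUM_PAGES   (1u << (16 - OFFSET_BITS))  /* 6-bit page number => 64 pages       */

int main(void)
{
    /* Per-process page table: page number n -> frame number k (values assumed). */
    uint16_t page_table[NUM_PAGES] = {0};
    page_table[1] = 5;                          /* e.g. page 1 resides in frame 5      */

    uint16_t logical = 0x0478;                  /* sample 16-bit logical address       */
    uint16_t page    = logical >> OFFSET_BITS;  /* the 6 most significant bits (n)     */
    uint16_t offset  = logical & OFFSET_MASK;   /* the 10 least significant bits (m)   */

    uint16_t frame    = page_table[page];       /* index the page table: n -> k        */
    uint32_t physical = ((uint32_t)frame << OFFSET_BITS) | offset;   /* append offset m */

    printf("logical 0x%04X = (page %u, offset %u) -> physical 0x%04X\n",
           (unsigned)logical, (unsigned)page, (unsigned)offset, (unsigned)physical);
    return 0;
}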
Paging Contd…
Advantages
■ Paging is simple to implement and an efficient memory management method.
■ Paging reduces external fragmentation.
■ Due to the equal size of the pages and frames, swapping becomes very easy.
Disadvantages
■ Paging reduces external fragmentation, but it still suffers from internal fragmentation.
■ The page table requires extra memory space, so paging may not be a good fit for a system with a small amount of RAM.
Segmentation
■ Segmentation is a memory management technique in which a program is subdivided into several
segments of different sizes, one for each module of the program that performs a specific function.
■ Each segment is a different logical address space of the program.
■ When a process is loaded into main memory, its different segments can be stored anywhere.
■ Each segment is fully packed with data, so no internal fragmentation occurs.
■ External fragmentation is reduced when small segments are used.
■ When a process is to be executed, its segments may be loaded into non-contiguous areas of
memory; however, each individual segment is loaded into a contiguous block of available memory.
■ Segmentation works very much like paging, but segments are of variable length, whereas in
paging, pages are of fixed size.
■ A program's segments contain the program's main function, utility functions, data structures, and so
on. The OS maintains a segment map table for every process, and a list of free memory blocks along
with the segment numbers, their sizes, and their corresponding locations in main memory. For each
segment, the table stores the starting address of the segment and the length of the segment. A reference to
a memory location includes a value that identifies a segment and an offset.
Segmentation Contd…
■ For each segment, the table stores the starting
address of the segment and the length of
the segment. A reference to a memory
location includes a value that identifies a
segment and an offset, as sketched below.
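A minimal sketch of a segment-table lookup with the bounds check (the segment bases and lengths are assumed values; a real OS would also keep protection bits, sharing information, etc.):

#include <stdio.h>
#include <stdint.h>

/* One entry of the segment map table: where the segment starts and how long it is. */
struct segment_entry {
    uint32_t base;     /* starting physical address of the segment */
    uint32_t length;   /* length of the segment in bytes           */
};

/* Translate (segment number, offset) to a physical address.
   Returns -1 on an invalid segment number or an offset beyond the segment length. */
static int64_t translate(const struct segment_entry table[], uint32_t nsegs,
                         uint32_t segment, uint32_t offset)
{
    if (segment >= nsegs)                 return -1;   /* no such segment      */
    if (offset  >= table[segment].length) return -1;   /* offset out of bounds */
    return (int64_t)table[segment].base + offset;
}

int main(void)
{
    /* Example segment map table for one process (values assumed). */
    struct segment_entry table[] = {
        { 1400, 1000 },   /* segment 0: main program */
        { 6300,  400 },   /* segment 1: data         */
        { 4300, 1100 },   /* segment 2: stack        */
    };

    printf("(1, 53)   -> %lld\n", (long long)translate(table, 3, 1, 53));    /* 6353             */
    printf("(2, 1200) -> %lld\n", (long long)translate(table, 3, 2, 1200));  /* -1: out of range */
    return 0;
}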
File System
■ A file is a collection of related information that is stored in secondary memory such as magnetic
tapes, magnetic disks, and optical disks.
■ A file is a sequence of bits, bytes, lines, or records whose meaning is defined by the file's creator and
user.
File Structure
■ A file has a defined structure according to its type.
■ A text file is a sequence of characters organized into lines.
■ A source file is a sequence of methods and functions.
■ An object file is a sequence of bytes organized into blocks that are understandable by the computer.
■ When an OS defines different file structures, it must also contain the code to support these file structures.
■ UNIX and MS-DOS support a minimal number of file structures.