
Memory Management

■ Multiple processes are accommodated in main memory through various tasks carried out by the OS and the hardware.
■ If only a few processes can be kept in main memory, the CPU may sit idle for long periods while all of them wait for I/O.
■ Therefore, memory must be allocated efficiently in order to pack as many processes as possible into memory.
■ The kernel loads into a fixed portion of main memory, and the rest is shared by the multiple processes.
Some Terminology of Memory Management
■ Frame
– A fixed-length block of primary/main memory.
■ Page
– A fixed-length block of information/data stored in secondary memory (disk, tape, etc.). A
page of data may temporarily be copied into a frame of main memory.
■ Segment
– A variable-length block of data that resides in secondary memory. A whole segment may
temporarily be copied into an available region of main memory (called segmentation), or
the segment may be divided into pages which can be copied into main memory separately
(referred to as combined segmentation and paging).
Logical vs. Physical Address Space
■ Proper memory management depends on the concept of a logical address space that is
bound to a separate physical address space.
Logical address – an address generated by the CPU (i.e., by a program), also called a virtual address.
Physical address – the corresponding address seen by the memory unit.
■ In compile-time and load-time address-binding schemes, logical and physical addresses are identical.
■ But in execution-time binding, the two addresses (virtual and physical) differ.
Memory-Management Unit (MMU)
■ The MMU is the hardware device that maps virtual addresses to physical addresses.
■ In the MMU scheme, the value in the relocation register is added to every address generated by a user
program at the time it is sent to memory.
■ The user program works with logical addresses and never sees the real physical addresses.
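As a sketch of the relocation-register scheme just described, the MMU's job can be modelled in a few lines of Python; the base and limit values below are hypothetical:

```python
# Minimal sketch of MMU-style dynamic relocation (hypothetical values).
RELOCATION = 14000   # relocation register: added to every logical address
LIMIT = 5000         # size of the process's logical address space

def translate(logical: int) -> int:
    """Map a logical address to a physical one, trapping on out-of-range access."""
    if not 0 <= logical < LIMIT:
        raise MemoryError(f"trap: logical address {logical} out of range")
    return logical + RELOCATION

translate(346)   # logical 346 maps to physical 14346
```

The limit check models the protection the hardware performs before adding the relocation value.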
Memory Management Techniques
Swapping
■ Swapping is a method in which a process can be temporarily swapped out of main memory to
secondary memory, making that memory available to other processes.
■ Some time later, the OS swaps the process back (when required) from secondary memory
into main memory.
■ Performance is generally affected by swapping, but it helps in executing large and multiple
processes in parallel; for that reason swapping is also called a memory compaction technique.
■ The total time for swapping includes the time taken to move a process from main memory
to secondary storage and then back into main memory, as well as the time the process
takes to regain main memory.
■ Suppose the size of a user process is 2048 KB and the data transfer rate of the standard
hard disk where swapping takes place is about 1 MB (1024 KB) per second.
■ The actual transfer time to or from memory is 2048 KB / 1024 KB per second = 2
seconds = 2000 milliseconds.
■ So swapping out and back in will take 4000 milliseconds in total, plus other overhead while the
process competes to regain main memory.
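The arithmetic above can be reproduced directly; the 2048 KB process size and 1024 KB/s transfer rate are the figures from the example:

```python
# Reproduces the swap-time arithmetic above (2048 KB process, 1 MB/s disk).
process_kb = 2048
rate_kb_per_s = 1024

transfer_s = process_kb / rate_kb_per_s   # one-way transfer time: 2.0 s
swap_ms = 2 * transfer_s * 1000           # swap out + swap in: 4000.0 ms

# Note: this excludes seek/rotational latency and scheduling overhead.
```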
Swapping Contd…

Memory Management Techniques Contd…


Fixed Partitioning
■ The main memory is divided into a set of non-overlapping static regions called partitions.
■ Partitions can be of equal or unequal sizes.
■ Any process whose size is less than or equal to the size of a partition can be loaded into that partition.
■ If all partitions are occupied, the OS can swap a
process out of a partition.
Strengths
■ Easy to implement.
■ Little OS overhead.
Fixed Partitioning Contd…
Disadvantages
■ A process may be too large to fit in any partition. The programmer must then design the program
with overlays.
■ Main memory utilization is inefficient:
– any process, irrespective of size, occupies an entire partition
– internal fragmentation
 wasted space due to the block of data loaded being smaller than the partition.
■ The number of partitions is fixed at system generation time, while the number of active processes
varies at execution time.
■ A small process will not utilize partition space efficiently.
Equal-size partitions
■ If there is a vacant partition, a process can be loaded into it; because all partitions
are of equal size, it does not matter which partition is used.
■ If all partitions are occupied by the processes, choose one process to swap out from memory to
make room for the new process.
Fixed Partitioning Contd…
Unequal-size partitions: use of multiple queues
■ Assign each process to the smallest partition within which it will fit.
■ A queue is kept for each partition size.
■ This tries to minimize internal fragmentation.
Problem
■ Some queues will be empty if no processes within those
size ranges are present.
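The multiple-queue scheme can be sketched as follows; the partition sizes and process names are hypothetical:

```python
from collections import deque

# Hypothetical unequal partition sizes (KB), one queue per size.
partition_sizes = [64, 128, 256, 512]
queues = {size: deque() for size in partition_sizes}

def enqueue(pid: str, size_kb: int) -> int:
    """Place a process on the queue of the smallest partition it fits in."""
    for psize in sorted(partition_sizes):
        if size_kb <= psize:
            queues[psize].append(pid)
            return psize
    raise MemoryError(f"{pid} ({size_kb} KB) fits no partition")

enqueue("P1", 100)   # goes to the 128 KB queue
enqueue("P2", 300)   # goes to the 512 KB queue
```

A queue stays empty exactly when no waiting process falls in that size range, which is the problem noted above.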

Memory Management Techniques Contd…


Fragmentation
■ As processes are loaded into and removed from memory, the free memory space is broken into
little pieces. After some time, processes cannot be allocated to memory blocks because the
remaining blocks are too small, and these tiny memory blocks remain unused. This problem is
known as fragmentation.
Dynamic Partitioning
■ Partitions are available in variable length and number.
■ Partitions are created dynamically, so that each process is loaded into a partition of exactly the
same size as that process.
■ Eventually, holes form in main memory. This is called external fragmentation.
– Total memory space is enough to satisfy a request or to hold a process, but the holes
(unused space) are not contiguous, so they cannot be used.
■ Compaction must be used to shift processes so that they are contiguous and all free memory is in one block.
– External fragmentation can be reduced by compaction: shuffling memory contents to
place all free memory together in one large block. To make compaction feasible, relocation
must be dynamic.
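Compaction itself can be sketched as a relocation pass over the allocated blocks; this is a simplified model (block names, addresses and sizes are hypothetical, and dynamic relocation is assumed):

```python
# Sketch of compaction: slide every allocated block toward address 0 so that
# all free space coalesces into one hole at the top of memory.
def compact(blocks):
    """blocks: list of (pid, start, size) tuples.
    Returns the relocated blocks and the start of the single free hole."""
    addr = 0
    relocated = []
    for pid, _start, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((pid, addr, size))   # move block down (dynamic relocation)
        addr += size
    return relocated, addr                    # addr = start of the one free hole

mem, hole_start = compact([("P3", 0, 288), ("P4", 352, 128), ("P1", 544, 320)])
# the scattered holes between blocks merge into one free region at hole_start
```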
Dynamic Partitioning Contd…Compaction Example
■ The diagram shows how fragmentation can waste memory and how a compaction procedure
can be used to create more free memory out of fragmented memory –


Internal fragmentation
■ Suppose the memory block assigned to a process is bigger than requested. Some portion of the
memory is left unused, and it cannot be used by another process.
Dynamic Partitioning : an example
■ A hole of 64K is left after
loading 3 processes: not
enough space for another
process.
■ Eventually, each process is
blocked. The OS swaps out
process 2 to bring in process
4.
Dynamic Partitioning : an example
Contd…
■ Another hole of 96K is created.
■ Ultimately, each process is
blocked. The OS swaps out
process 1 to bring process 2
back in, and another hole of
96K is created...
■ Compaction would produce a
single hole of 256K
Dynamic Partitioning Contd…
Strengths
■ No internal fragmentation, more efficient use of main
memory.
Weaknesses
■ Inefficient use of processor due to the need for
compaction to counter external fragmentation.
Placement Algorithm
■ Used to decide which free block to allocate to a process.
■ Goal: to reduce the use of compaction (a time-consuming
process).
■ Possible algorithms:
– Best-fit: chooses the block that is closest in size to the
request.
– First-fit: begins to scan memory from the beginning and chooses the first available block that is
large enough.
– Next-fit: begins to scan memory from the location of the last placement and chooses the next
available block that is large enough.
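The three placement algorithms can be sketched over a list of free blocks; the block list below is hypothetical:

```python
# Sketch of the three placement algorithms over a free-block list.
# Each block is (start, size); each function returns the index of the chosen
# block, or None if no block is large enough.

def first_fit(blocks, request):
    for i, (_start, size) in enumerate(blocks):
        if size >= request:
            return i                   # first block that is large enough
    return None

def best_fit(blocks, request):
    fits = [(size, i) for i, (_start, size) in enumerate(blocks) if size >= request]
    return min(fits)[1] if fits else None   # tightest fit

def next_fit(blocks, request, last=0):
    n = len(blocks)
    for k in range(n):                 # scan from the last placement, wrapping
        i = (last + k) % n
        if blocks[i][1] >= request:
            return i
    return None

free = [(100, 50), (200, 120), (400, 60), (600, 200)]
first_fit(free, 60)         # index 1: first block with size >= 60
best_fit(free, 60)          # index 2: size 60 is the tightest fit
next_fit(free, 60, last=2)  # index 2: scanning resumes at the last placement
```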
Paging
■ Paging is a memory management technique in which the address space of a process is broken into
blocks of the same size called pages (the size is a power of 2, typically between 512 bytes and 8192
bytes). The size of the process is measured in the number of pages.
■ Main memory is partitioned into equal fixed-size chunks (of relatively small size).
■ Each process is likewise divided into chunks of the same size, called pages.
■ The pages can be assigned to the available chunks of main memory, called frames (or page frames).
■ Consequence: a process (a collection of pages) does not need to occupy a contiguous memory
space. The size of a frame is kept the same as that of a page to get optimum utilization of main
memory and to avoid external fragmentation.
Paging Contd…
■ Now suppose that process B is
swapped out
Paging Contd…
■ When processes A and C are blocked, the pager loads a
new process D consisting of 5 pages.
■ Process D does not occupy a contiguous portion of
memory.
■ There is no external fragmentation.
■ Internal fragmentation occurs only in the last page of
each process.
■ A page address is called a logical address and is represented
by a page number and an offset.
■ Logical address = (Page Number, Page Offset)
■ A frame address is called a physical address and is represented by a frame number and an offset.
■ Physical address = (Frame Number, Page Offset)
■ A data structure called a page map table is used to keep track of which frame in physical
memory holds each page of a process.
Page Tables
■ The OS maintains (in main memory) a page table for each process.
■ Each entry of a page table contains the frame number where the corresponding page is physically
located.
■ The page table is indexed by the page number to obtain the frame number.
■ A free-frame list is maintained to track available frames.
Paging Contd…
Logical address used in paging
■ Within each page, each logical address consists of a page number and an offset
within the page.
■ A CPU register always holds the starting physical address of the page table of the
currently running process.
■ Using the logical address (page number, offset), the processor accesses the
page table to obtain the physical address (frame number, offset).
■ The logical address becomes a relative address when the page size is a power of 2.
■ Example: if 16-bit addresses are used and the page size = 1K, 10 bits are needed for
the offset and the remaining 6 bits are available for the page number.
■ The 16-bit address, with the 10 least significant bits as the offset and the 6 most
significant bits as the page number, is a location relative to the beginning of the process.
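The 16-bit/1K-page example can be expressed with shifts and masks; the page-table contents below are hypothetical:

```python
# 16-bit logical address, 1 KB pages: 6-bit page number, 10-bit offset.
PAGE_SIZE = 1024                     # a power of 2, so shifts and masks suffice
OFFSET_BITS = 10

page_table = {0: 5, 1: 3, 2: 7}      # hypothetical page -> frame mapping

def translate(logical: int) -> int:
    page = logical >> OFFSET_BITS            # 6 most significant bits
    offset = logical & (PAGE_SIZE - 1)       # 10 least significant bits
    frame = page_table[page]                 # page table indexed by page number
    return (frame << OFFSET_BITS) | offset   # append the same offset to the frame

translate(1 * PAGE_SIZE + 20)   # page 1, offset 20 -> frame 3, offset 20
```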

Paging Contd…
Logical address in paging
■ The pages (with a page size that is a power of 2) are invisible to the programmer and the compiler.
■ Address translation at execution time is easy to implement in hardware
– a logical address (n, m) is translated to the physical address (k, m) by indexing the page table
and appending the same offset m to the frame number k.
Logical-to-Physical Address Translation in Paging

Paging Contd…

Paging Contd…
Advantages
■ Paging is simple to implement and an efficient memory management method.
■ Paging reduces external fragmentation.
■ Due to the equal size of the pages and frames, swapping becomes very easy.
Disadvantages
■ Paging reduces external fragmentation, but still suffers from internal fragmentation.
■ The page table requires extra memory space, so paging may not be good for a system with a small RAM.
Segmentation
■ Segmentation is a memory management technique in which a program is subdivided into several
segments of different sizes, one for each module of the program that performs a specific function.
■ Each segment is a different logical address space of the program.
■ When a process is loaded into main memory, its different segments can be stored anywhere.
■ Each segment is fully packed with data, so no internal fragmentation occurs.
■ External fragmentation is reduced when small segments are used.
■ When a process is to be executed, its segments are loaded into non-contiguous
memory, although each individual segment is loaded into a contiguous block of available memory.
■ Segmentation works very much like paging, but segments are of variable length whereas in
paging, pages are of fixed size.
■ A program's segments contain its main function, utility functions, data structures, and so
on. The OS maintains a segment map table for every process and a list of free memory blocks along
with segment numbers, their sizes and corresponding memory locations in main memory. For each
segment, the table stores the starting address and the length of the segment. A reference to
a memory location includes a value that identifies a segment and an offset.

Memory Management Techniques

Logical address used in segmentation


■ When a process enters the Running state, a CPU register is loaded with the starting address of
the process's segment table.
■ The logical address is presented as a (segment number, offset) = (n, m) pair; from the segment table
the processor obtains the starting physical address k and the length l of that segment.
■ The physical address is obtained by adding m to k (in contrast with paging, where the offset is appended)
– the hardware also compares the offset m with the length l of the segment to determine
whether the address is valid.
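The translation and bounds check can be sketched as follows; the segment-table entries are hypothetical:

```python
# Sketch of segmentation address translation.
# The segment table maps a segment number to (base address k, length l).
segment_table = {0: (1400, 1000), 1: (6300, 400)}  # hypothetical entries

def translate(segment: int, offset: int) -> int:
    base, length = segment_table[segment]
    if offset >= length:                  # compare m with l: validity check
        raise MemoryError("trap: offset beyond segment length")
    return base + offset                  # add, rather than append (unlike paging)

translate(1, 53)   # 6300 + 53 = 6353
```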
Logical-to-Physical Address Translation in segmentation
Simple segmentation and paging comparison
■ Segmentation needs more complex hardware for address translation.
■ Segmentation suffers from external fragmentation.
■ Paging produces only a small amount of internal fragmentation.
■ Segmentation is visible to the programmer but paging is transparent.
Memory Management Techniques
Demand Paging
■ Demand paging is quite similar to a paging system with swapping, where the processes reside
in secondary memory and pages are loaded into main memory only on demand, not in advance.
When a context switch occurs, the OS does not copy any of the old program's pages out to
secondary memory or any of the new program's pages into main memory. Instead, it simply
begins executing the new program after loading its first page and fetches that program's
pages as they are referenced.

Demand Paging Contd…


■ While executing a program, if the program references a page which is not available in main
memory because it was swapped out a while ago, the processor treats this invalid memory
reference as a page fault and transfers control from the program to the OS, which brings the page
back into memory.
Advantages
■ More efficient use of memory.
■ There is no limit on degree of multiprogramming.
Disadvantages
■ The number of tables and the amount of processor overhead for managing page interrupts are
greater than in the case of the simple paging method.

Page Replacement Algorithm


■ Page replacement algorithms are the procedures by which an OS decides which memory pages to
swap out from main memory to secondary memory when a page of memory needs to be allocated.
■ Page replacement occurs when a page fault happens and no free frame can be used for the
allocation, either because no frames are available or because the number of free frames is lower
than required.
■ A page replacement algorithm looks at the limited information about page accesses
provided by the hardware, and tries to select which pages should be replaced so as to minimize the
total number of page misses, while balancing this against the primary-storage and processor-time
costs of the algorithm itself.
■ There are different page replacement algorithms.
■ These algorithms are evaluated by running them on a particular string of memory references and
counting the number of page faults.

First In First Out (FIFO) algorithm


■ The oldest page in main memory is selected for replacement.
■ Easy to implement: keep a list, replace pages from the tail and add new pages at the head.

Least Recently Used (LRU) algorithm


■ The page which has not been used for the longest time in main memory is the one
selected for replacement.
■ Easy to implement: keep a list, and replace pages by looking back in time.
Optimal Page Replacement algorithm
■ The optimal page-replacement algorithm has the lowest page-fault rate.
■ Replace the page that will not be used for the longest period of time; this requires knowing
when each page will next be used.
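The three algorithms can be sketched and compared on one hypothetical reference string; as expected, the optimal algorithm faults least:

```python
# Sketch of FIFO, LRU and Optimal page replacement. Each function returns
# the number of page faults for a reference string and a frame count.
from collections import OrderedDict

def fifo(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)                # evict the oldest page
            frames.append(p)
    return faults

def lru(refs, nframes):
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)            # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)   # evict least recently used
            frames[p] = True
    return faults

def optimal(refs, nframes):
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # evict the page whose next use lies farthest in the future
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else len(refs)
            frames.remove(max(frames, key=next_use))
        frames.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
fifo(refs, 3), lru(refs, 3), optimal(refs, 3)   # OPT is never worse than the others
```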

File System
■ A file is a collection of related information that is stored in secondary memory such as magnetic
tapes, magnetic disks and optical disks.
■ A file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and
user.
File Structure
■ A file has a defined structure according to its type.
■ A text file is a sequence of characters organized into lines.
■ A source file is a sequence of methods and functions.
■ An object file is a sequence of bytes organized into blocks that are understandable by the computer.
■ When an OS defines different file structures, it also contains the code to support those file structures.
■ Unix and MS-DOS support a minimal number of file structures.

File Access Mechanisms


■ File access mechanisms refer to the way in which the records of a file may be accessed. There are several ways to access files −
Sequential access
■ In sequential access, the records are accessed in sequence, i.e., the information
in the file is processed in order, one record after the other. This access method is the most primitive
one.
■ Example: Compilers usually access files in this fashion.
Direct/Random access
■ Random access file organization provides direct access to records.
■ Each record has its own address in the file, with the help of which it can be directly accessed for
reading or writing.
■ The records need not be in any sequence within the file, and they need not be in adjacent
locations on the storage medium.
Indexed sequential access
■ This mechanism is built on top of sequential access.
■ An index is created for each file which contains pointers to various blocks.
■ Index is searched sequentially and its pointer is used to access the file directly.
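The difference between sequential and direct access can be illustrated with fixed-length records; the file name and record format below are hypothetical:

```python
# Sketch: sequential vs. direct access over a file of fixed-length records
# (hypothetical 16-byte records written to a temporary file).
import os
import tempfile

RECLEN = 16
path = os.path.join(tempfile.mkdtemp(), "records.dat")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"record-{i}".ljust(RECLEN).encode())  # records 0..4

with open(path, "rb") as f:
    first = f.read(RECLEN)    # sequential: reads the next record in order
    f.seek(3 * RECLEN)        # direct: jump straight to record 3 by address
    third = f.read(RECLEN)

first.decode().strip()   # 'record-0'
third.decode().strip()   # 'record-3'
```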
