BCA OS Unit3

The document provides an overview of main memory management techniques, including swapping, contiguous memory allocation, segmentation, and paging. It discusses virtual memory concepts, demand paging, page replacement algorithms, and frame allocation strategies. Additionally, it highlights issues like thrashing, which occurs when a system spends excessive time transferring pages between memory and storage.


Unit III

Main Memory: Background, Swapping, Contiguous Memory Allocation, Segmentation, Paging,


Structure of the Page Table.
Virtual Memory: Background, Demand Paging, Page Replacement, Allocation of Frames,
Thrashing, Memory-Mapped Files, Mass-Storage Structure, Overview of Mass-Storage Structure,
Disk Structure, Disk Attachment, Disk Scheduling, Disk Formatting, RAID Structure

Main Memory

Introduction
Main memory (RAM) is an important resource that must be very carefully managed. The part of the
operating system that manages the memory hierarchy is called the memory manager. Its job is to
efficiently manage memory: keep track of which parts of memory are in use, allocate memory to
processes when they need it, and deallocate it when work is completed.
Swapping

A process must be in memory to be executed. A process, however, can be swapped temporarily out of
memory to a backing store and then brought back into memory for continued execution, as shown in
Figure 8.5.

Standard Swapping

Standard swapping involves moving processes between main memory and a backing store. The
backing store should be a fast disk. The system maintains a ready queue consisting of all processes
whose memory images are on the backing store or in memory. Whenever the CPU scheduler decides
to execute a process, it calls the dispatcher. The dispatcher checks to see whether the next process in
the queue is in memory. If it is not, and if there is no free memory region, the dispatcher swaps out a
process currently in memory and swaps in the desired process.

Figure 8.5 Swapping of two processes using a disk as a backing store.
Contiguous Memory Allocation

The main memory must contain both the operating system and the user processes. Therefore we need
to allocate main memory in the most efficient way. The memory is usually divided into two partitions:
one for the resident operating system and one for the user processes.

In contiguous memory allocation, each process is contained in a single section of memory that is
contiguous to the section containing the next process.
Relocation and limit registers are used to protect user processes from each other.
The base (relocation) register contains the value of the smallest physical address available to the process
(the address seen by the memory unit).
The limit register contains the range of logical addresses – each logical address (the address generated
by the CPU) must be less than the limit register.

The MMU (Memory Management Unit) maps each logical address to a physical address dynamically by
adding the value in the relocation register.
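The limit check and relocation step can be written as a minimal Python sketch (the register values 14000 and 3000 are illustrative, not from the text):

```python
def relocate(logical_addr, base, limit):
    """Map a CPU-generated logical address to a physical address.

    The MMU first checks the logical address against the limit
    register; an address outside the range is a protection error
    (a trap to the operating system). Otherwise the base
    (relocation) register value is added.
    """
    if logical_addr >= limit:
        raise MemoryError("trap: address beyond limit register")
    return base + logical_addr

# A process loaded at physical address 14000 with a 3000-byte limit:
print(relocate(346, base=14000, limit=3000))   # 14346
```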


Memory Allocation

The simplest method for allocating memory is to divide memory into several fixed-sized partitions.
Each partition may contain exactly one process. When a partition is free, a process is selected from the
input queue and is loaded into the free partition. When the process terminates, the partition becomes
available for another process.

In the variable-partition scheme, the operating system keeps a table indicating which parts of
memory are available and which are occupied. Initially, all memory is available for user processes and
is considered one large block of available memory, a hole. Memory contains a set of holes of different
sizes.

The operating system takes into account the memory requirements of each process and the amount of
available memory space in determining which processes are allocated memory. At any given time,
then, we have a list of available block sizes and an input queue. The operating system can order the
input queue according to a scheduling algorithm.

The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to
select a free hole from the set of available holes.
• First fit. Allocate the first hole that is big enough.
• Best fit. Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
• Worst fit. Allocate the largest hole. Again, we must search the entire list. This strategy
produces the largest leftover hole.
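The three strategies can be sketched in Python; the hole sizes and the 212 KB request below are illustrative, not from the text:

```python
def first_fit(holes, size):
    """Return the index of the first hole big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole big enough, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, if big enough, else None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [100, 500, 200, 300, 600]   # free-hole sizes in KB
print(first_fit(holes, 212))  # 1: the 500 KB hole is the first that fits
print(best_fit(holes, 212))   # 3: the 300 KB hole is the smallest that fits
print(worst_fit(holes, 212))  # 4: the 600 KB hole is the largest
```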

SEGMENTATION
Segmentation is a memory management technique in which memory is divided into variable-sized
chunks that can be allocated to processes. Each chunk is called a segment. A table that stores
information about all such segments is called the Segment Table.
Segment Table – It maps a two-dimensional logical address (segment number, offset) into a
one-dimensional physical address. Each table entry has:

• Base Address: the starting physical address where the segment resides in memory.
• Limit: the length of the segment.
Advantages of Segmentation:

• No internal fragmentation.
• The segment table consumes less space than the page table used in paging.

Disadvantage of Segmentation:

• As processes are loaded and removed from memory, the free memory space is broken into
little pieces, causing external fragmentation.
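The segment-table lookup described above can be sketched in Python; the base/limit values below are a hypothetical segment table, not from the text:

```python
# Hypothetical segment table: each entry is (base, limit).
segment_table = [(1400, 1000),   # segment 0
                 (6300, 400),    # segment 1
                 (4300, 1100)]   # segment 2

def seg_translate(segment, offset):
    """Map a 2-D logical address (segment, offset) to a 1-D physical one."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 53))    # 4353: byte 53 of segment 2
print(seg_translate(0, 999))   # 2399: last valid byte of segment 0
```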

Paging
Paging is a memory management scheme by which a computer stores and retrieves data
from secondary storage for use in main memory.
Paging involves breaking physical memory into fixed-sized blocks called frames and breaking
logical memory into blocks of the same size called pages. When a process is to be executed, its pages
are loaded into any available memory frames from their source (a file system or the backing store).

The hardware support for paging is illustrated in Figure 8.10. Every address generated by the CPU is
divided into two parts: a page number (p) and a page offset (d). The page table contains the base
address of each page in physical memory.

Figure 8.10 Paging hardware.
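The split of a logical address into page number (p) and offset (d) can be sketched in Python; the 4 KiB page size and the page-table contents are assumptions for illustration:

```python
PAGE_SIZE = 4096       # assumed 4 KiB pages
OFFSET_BITS = 12       # log2(PAGE_SIZE)

# Hypothetical page table: page_table[p] = frame number f.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def page_translate(logical_addr):
    """Split a logical address into (p, d) and map page p to frame f."""
    p = logical_addr >> OFFSET_BITS        # page number: high-order bits
    d = logical_addr & (PAGE_SIZE - 1)     # page offset: low-order bits
    f = page_table[p]                      # page-table lookup
    return (f << OFFSET_BITS) | d          # physical address

print(hex(page_translate(0x1ABC)))   # page 1 -> frame 6: 0x6abc
```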
Virtual memory

Virtual memory is a storage mechanism that gives the user the illusion of having a very large
main memory. It works by treating a part of secondary storage as if it were main memory. With virtual
memory, the user can run processes bigger than the available main memory.

Virtual memory abstracts main memory into an extremely large, uniform array of storage,
separating logical memory as viewed by the user from physical memory.

This technique frees programmers from the concerns of memory-storage limitations. Virtual
memory also allows processes to share files easily and to implement shared memory. Virtual memory
is mostly implemented with demand paging and demand segmentation.

Figure: Virtual memory that is larger than physical memory.


Demand Paging

A demand-paging mechanism is very similar to a paging system with swapping, where processes are
stored in secondary memory and pages are loaded only on demand, not in advance.
PAGE REPLACEMENT ALGORITHMS
A page fault occurs when a program attempts to access data or code that is in its address space, but is
not currently located in the RAM. The operating system has to choose a page to remove from memory
to make space for the incoming page. If the page to be removed has been modified while in memory,
it must be rewritten to the disk to bring the disk copy up to date.
The following are the page replacement algorithms:
1) The First-In, First-Out (FIFO) Page Replacement Algorithm
The operating system maintains a list of all pages currently in memory, with the most recent arrival at
the tail and the least recent arrival at the head. On a page fault, the page at the head is removed and the
new page added to the tail of the list. For example, consider the reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a memory with three frames (3 pages can be in
memory at a time per process)
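As a minimal sketch (not from the text), FIFO can be simulated in Python; running it on the reference string above with three frames gives 15 page faults:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # head = least recent arrival
    faults = 0
    for page in refs:
        if page in frames:
            continue                 # hit: FIFO ignores references
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()         # evict the oldest arrival
        frames.append(page)          # new page goes to the tail
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15 page faults
```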

2) The Optimal Page Replacement Algorithm

The optimal page replacement algorithm removes the page that will not be used for the longest period
of time; equivalently, if each page is labeled with the number of instructions that will execute before
that page is next used, the page with the highest label should be removed. If one page will not be used
for 8 million instructions and another page will not be used for 6 million instructions, removing the
former pushes the page fault that will fetch it back as far into the future as possible.
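A minimal Python sketch of the optimal algorithm (not from the text): on a fault it evicts the resident page whose next use lies farthest in the future. On the reference string from the FIFO example it gives 9 faults with three frames:

```python
def optimal_faults(refs, n_frames):
    """Count faults when the victim is the page used farthest in the future."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
            continue
        # Evict the page whose next use is farthest away (or never).
        def next_use(q):
            future = refs[i + 1:]
            return future.index(q) if q in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(optimal_faults(refs, 3))   # 9 page faults
```

Optimal is unrealizable in practice (it needs future knowledge) but serves as a lower bound for comparing real algorithms.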

3) The Least Recently Used (LRU) Page Replacement Algorithm


When a page fault occurs, throw out the page that has been unused for the longest time. This strategy
is called LRU (Least Recently Used) paging.
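A minimal LRU sketch in Python (not from the text), using an ordered dictionary so that insertion order tracks recency; the same reference string yields 12 faults with three frames:

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count faults when the least recently used page is evicted."""
    frames = OrderedDict()               # insertion order = recency order
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)     # a hit makes the page most recent
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)   # evict the least recently used
        frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12 page faults
```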
4) The Not Recently Used Page Replacement Algorithm

The NRU (Not Recently Used) algorithm classifies pages into four classes using the R (referenced) and
M (modified) bits: class 0 (not referenced, not modified), class 1 (not referenced, modified), class 2
(referenced, not modified), and class 3 (referenced, modified). On a page fault, NRU removes a page at
random from the lowest-numbered nonempty class. The idea of this algorithm is that it is better to
remove a modified page that has not been referenced in at least one clock tick than a clean page that is
in heavy use.

5) The Second-Chance Page Replacement Algorithm


The second chance page replacement algorithm looks for an old page that has not been referenced in
the most recent clock interval. If all the pages have been referenced, second chance degenerates into
pure FIFO.

Figure 3-15. Operation of second chance.

Load time:  0   3   7   8   12  14  15  18
Page:       A   B   C   D   E   F   G   H

(a) Pages in FIFO order; A was loaded first and H most recently.

Load time:  3   7   8   12  14  15  18  20
Page:       B   C   D   E   F   G   H   A

(b) A page fault occurs at time 20 and A's R bit is set, so A is treated like a
newly loaded page and moved to the tail of the list.
6) The Clock Page Replacement Algorithm
A better approach is to keep all the page frames on a circular list in the form of a clock, as shown
in Fig. 3-16. The hand points to the oldest page.

Figure 3-16. The clock page replacement algorithm.


When a page fault occurs, the page being pointed to by the hand is inspected. If its R bit is 0, the
page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one
position. If R is 1, it is cleared and the hand is advanced to the next page. This process is repeated until
a page is found with R = 0. Not surprisingly, this algorithm is called clock.
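One replacement step of the clock algorithm can be sketched in Python (the frame contents and R bits below are illustrative, not from the figure):

```python
def clock_replace(frames, r_bits, hand, new_page):
    """Advance the clock hand until a page with R == 0 is found.

    frames  - circular list of resident pages
    r_bits  - reference bit for each frame
    hand    - index the hand currently points to (the oldest page)
    Returns the new hand position (one past the replaced frame).
    """
    while r_bits[hand]:                  # R == 1: clear it (second chance)
        r_bits[hand] = 0
        hand = (hand + 1) % len(frames)  # advance the hand
    frames[hand] = new_page              # R == 0: evict and replace in place
    r_bits[hand] = 1
    return (hand + 1) % len(frames)

frames, r_bits = ["A", "B", "C", "D"], [1, 1, 0, 1]
hand = clock_replace(frames, r_bits, 0, "X")
print(frames, hand)   # ['A', 'B', 'X', 'D'] 3
```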

Allocation of Frames

The main memory is divided into parts called frames, and the process is divided into pages, so that a
part of a process (a page) can be accommodated in a frame. A page table keeps track of the pages and
where they reside in main memory. When a page fault occurs, a free frame must be found to hold the
incoming page; the frame-allocation policy determines how many frames each process receives.

The two major frame-allocation schemes are:


1. Equal allocation
2. Proportional allocation

Equal allocation: The easiest way to split m frames among n processes is to give everyone an equal
share, m/n frames. This is known as equal allocation.

Proportional allocation: available memory is allocated to each process according to its size. Let
the size of the virtual memory for process pi be si, and define S = ∑ si.

Then, if the total number of available frames is m, we allocate ai frames to process pi, where ai is
approximately ai = si / S × m.
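The formula can be sketched in Python; the two process sizes (10 and 127 pages) and the 62 available frames are illustrative, not from the text:

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to each process's size s_i.

    Computes a_i = floor(s_i / S * m). A real allocator must also
    guarantee each process a minimum number of frames and hand out
    any leftover frames from rounding down.
    """
    S = sum(sizes)
    return [s * m // S for s in sizes]

# Two hypothetical processes of 10 and 127 pages sharing 62 frames:
print(proportional_allocation([10, 127], 62))   # [4, 57]
```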
Global Versus Local Allocation

We can classify page replacement algorithms into two broad categories: global replacement and local
replacement.

Global replacement allows a process to select a replacement frame from the set of all frames, even if
that frame is currently allocated to some other process; one process can take a frame from another.

Local replacement requires that each process selects from only its own set of allocated frames.

Thrashing:

The system spends most of its time transferring pages between main memory and secondary memory
due to frequent page faults. This behavior is known as thrashing. A process is thrashing if it is
spending more time paging than executing. This leads to low CPU utilization.

