
Virtual Memory Management

Background
• A process must be loaded into main memory before it can execute.
• This requires placing the entire logical address space of the program in physical
memory.
• Since physical memory is limited, the size of a program is limited to the size of
physical memory.
• Therefore, we need techniques that let us execute a program without loading its
entire logical address space.
• There are situations where a program must be in memory to execute, but the
entire program is rarely used all at once. That means the entire program does not
need to be fully loaded in main memory:
1. User-written error-handling routines are used only when an error occurs
in the data or computation.
2. Certain options and features of a program may be used rarely.
3. Many tables are assigned a fixed amount of address space even though only
a small amount of the table is actually used.
Background
• The ability to execute a program that is only partially in memory would confer
many benefits.
• A program would no longer be constrained by the amount of physical
memory that is available.
• Each user program could take less physical memory, so more programs could be
run at the same time, with a corresponding increase in CPU utilization and
throughput.
• A computer can address more memory than the amount physically installed on
the system. This needs a section of a hard disk that is set up to emulate the
function of the computer's RAM (main memory).
• Virtual memory is a storage allocation scheme in which secondary memory can
be used as though it were part of main memory.
• Virtual memory involves the separation of user logical memory from physical
memory.
• This separation allows a large virtual memory for programmers when only a
smaller physical memory is available.
Virtual Memory That is Larger Than Physical Memory
Background
[Physical memory refers to the actual RAM of the system attached to the
motherboard, but the virtual memory is a memory management technique that
allows the execution of processes which are not completely available in memory.]
Virtual address space – the logical view of how a process is stored in memory (the
collection of logical or virtual addresses available to a process is called the
virtual address space of the process).
• Usually starts at address 0, with contiguous addresses until the end of the space.
• The virtual address space is kept in secondary storage
(disk).
• The operating system maintains an image of the virtual
address space in secondary storage. Because an image
of the address space is kept in secondary storage, it can
be larger than the physical memory.
Background
• Unused address space between the heap and stack is called a hole; an address
space containing such holes is known as a sparse address space.
• The benefit of a sparse address space is that the holes can be filled as the stack
and heap segments grow, or as libraries are dynamically linked during program
execution.
Advantages:
• Only part of the program needs to be in memory for execution
• Logical address space can therefore be much larger than physical address
space
• Allows address spaces to be shared by several processes - virtual memory
allows one process to create a region of memory that it can share with
other processes. Processes sharing this region consider it part of their
virtual address space.
Virtual memory can be implemented via:
1. Demand paging
2. Demand segmentation
Demand Paging
• One option to execute a program is to load the entire program in physical memory
at program execution. A problem with this option is that we may not initially need
the entire program in memory.
Ex: A program with a list of options from which the user is to select. Loading the
entire program into memory loads the executable code for all options, regardless of
whether an option is ever selected by the user.
• An alternative strategy is to load pages only as they are needed. This technique is
known as demand paging and commonly used in virtual memory systems.
Demand paged virtual memory:
• Pages are loaded only when they are demanded during execution.
• Pages that are never accessed are never loaded into physical memory.
Similar to a paging system with swapping, where processes reside in secondary
memory (disk).
Demand Paging
• When a process is to be executed, instead of swapping in the entire process, a
lazy swapper is used to load into memory only the pages that are needed.
• The lazy swapper, called a pager, never swaps a page into memory unless the
page will be needed.
• When a process is to be swapped in, the pager guesses which pages will be
used before the process is swapped out again. So instead of swapping in a whole
process, the pager brings only those pages into memory.
• This decreases the swap time and the amount of physical memory needed.
• This needs hardware support to distinguish between the pages that are in
memory and the pages that are on the disk (not in memory).
• The valid-invalid bit scheme can be used for this purpose.
• This bit is set to “valid” when the associated page is both legal and in memory,
and set to “invalid” to indicate that the page either is not valid (not in the logical
address space of the process) or is valid but currently on the disk.
Page Table When Some Pages Are Not in Main Memory
• The page table entry for a page that is brought into memory is set as “valid”, but
the page table entry for a page that is not currently in memory is either marked
“invalid” or contains the address of the page on disk.
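Ex (illustrative sketch): the valid-invalid bit can be pictured with a simple data
structure. The Python below is only a sketch with hypothetical field names, not how
real hardware lays out a page table entry.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PageTableEntry:
    valid: bool = False              # "valid": page is legal and resident in memory
    frame: Optional[int] = None      # frame number while the page is in memory
    disk_addr: Optional[int] = None  # location in swap space while it is not

# Page 0 is resident in frame 3; page 1 is legal but currently on disk.
page_table = [PageTableEntry(valid=True, frame=3),
              PageTableEntry(valid=False, disk_addr=0x2A00)]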
Page Fault
• Marking a page invalid has no effect if the process never attempts to access that
page.
But what happens when the process tries to access a page that was not brought into
memory?
• Access to a page marked invalid causes a page fault.
• The paging hardware, in translating the address through the page table, will notice
that the invalid bit is set, causing a trap to the OS.
Steps in Handling a Page Fault
• The first reference to a page that is not in memory traps to the
operating system: a page fault.
1. The operating system looks at another table to decide:
• Invalid reference -> abort the process
• The page is valid but just not in memory
2. Find the location of the desired page on the disk.
3. Find a free frame.
4. Swap the page into the frame via a scheduled disk operation.
5. Reset the tables to indicate the page is now in memory (set the valid bit = v).
6. Restart the instruction that caused the page fault.
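Ex (illustrative sketch): putting the steps together, a minimal simulation of the
fault-handling path could look as follows. It reuses the PageTableEntry sketch above;
free_frames and swap_in are assumed helpers, not OS APIs.

def access(page_table, free_frames, swap_in, page_num):
    """Simulate one memory reference under pure demand paging."""
    if page_num >= len(page_table):
        raise RuntimeError("invalid reference - abort")   # step 1
    entry = page_table[page_num]
    if entry.valid:
        return entry.frame                   # page already resident: no fault
    # Page fault: the reference is legal but the page is on the disk (step 2).
    frame = free_frames.pop()                # step 3: find a free frame
    swap_in(entry.disk_addr, frame)          # step 4: read the page from swap space
    entry.frame, entry.valid = frame, True   # step 5: reset tables, set valid bit
    return frame                             # step 6: the instruction can be restarted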
Aspects of Demand Paging
• Extreme case – start a process with no pages in memory
• The OS sets the instruction pointer to the first instruction of the process,
which is non-memory-resident -> page fault. After this page is brought into
memory, the process continues to execute.
• Every other page of the process likewise faults on its first access.
• This scheme is called pure demand paging.

• Hardware support needed for demand paging:
• Page table with valid/invalid bit
• Secondary memory: to hold the pages that are not in main memory. The
secondary memory is a high-speed disk known as the swap device, and the
section of the disk used for this purpose is known as swap space.
• Instruction restart after a page fault.
Performance of Demand Paging
• Demand paging affects the performance of a computer system.
• Effective Access Time (EAT) is computed for demand-paged memory.
• ma => memory access time
• As long as there are no page faults => EAT = ma.
• If a page fault occurs => read the relevant page from disk into memory and then
access the desired word from memory.
• Let p be the probability of a page fault (the page fault rate), 0 ≤ p ≤ 1
• if p = 0, there are no page faults
• if p = 1, every reference is a fault
• Then Effective Access Time (EAT) = p x page fault time + (1 - p) x ma
• Page fault time = amount of time needed to service a page fault
Ex: Memory access time = 200 ns and average page fault service time = 8 ms, then
EAT = p x 8 ms + (1 - p) x 200 ns = p x 8,000,000 ns + 200 ns - 200 ns x p
    = 200 + 7,999,800 x p (ns), i.e. EAT is directly proportional to the page fault rate.
• If 1 access out of 1000 causes a page fault, what is the EAT?
Performance of Demand Paging
• EAT = 200 + 7,999,800 x p (ns)
      = 200 + 7,999,800 x 0.001
      = 200 + 7999.8 = 8199.8 ns
      = 8199.8 / 1000 microseconds
      ≈ 8.2 microseconds
The computer is slowed down by a factor of about 41 because of demand paging,
since 8199.8 / 200 ≈ 41.
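Ex (illustrative sketch): the same arithmetic as a small Python script, using the
numbers from the example above.

ma = 200                 # memory access time, in ns
fault_time = 8_000_000   # page-fault service time: 8 ms expressed in ns

def eat(p):
    """Effective access time in ns for a page-fault probability p."""
    return p * fault_time + (1 - p) * ma

print(eat(0))          # 200.0 ns  (no page faults)
print(eat(1 / 1000))   # 8199.8 ns, about 8.2 microseconds (roughly 41x slower)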
Page Replacement
• In computing the page fault rate, we assumed that each page faults at most once,
when it is first referenced. This is not strictly accurate.
Ex: If a process of 10 pages actually uses only half of them, then demand paging
saves the I/O necessary to load the 5 pages that are never used.
• It also increases the degree of multiprogramming by allowing more processes to run.
Ex: If there are 40 frames, then 8 processes can run, rather than the 4 processes that
could run if each required all 10 of its frames (5 of which are never used).
• If the degree of multiprogramming is increased too far, memory becomes over-allocated.
Over-allocation of memory manifests itself as follows:
• While a user process is executing, a page fault occurs.
• The OS determines where the desired page resides on the disk but then finds
that there are no frames on the free-frame list; all memory is in use.
Need For Page Replacement
Page Replacement
The OS has several options:
1. Terminate the user process, decreasing CPU utilization and throughput.
2. Swap out the process, freeing all its frames and reducing the degree of
multiprogramming.
3. Page replacement
Basic Page Replacement:
The page replacement technique uses the following approach:
• If there is no free frame, find one that is not currently being used and
free it.
• A frame can be freed by writing its contents to swap space and then changing the
page table to indicate that the page is no longer in memory.
• Use the freed frame to hold the page for which the page fault occurred.
• The page fault service routine is modified to include page replacement:
Basic Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
a) If there is a free frame, use it.
b) If there is no free frame, use a page-replacement algorithm to select a
victim frame.
c) Write the victim frame to the disk and change the page table and frame
table accordingly.
3. Read the desired page into the newly freed frame and change the page and
frame tables.
4. Restart the process.
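Ex (illustrative sketch): a possible shape of the modified service routine in Python.
Here select_victim stands in for whichever page-replacement algorithm is chosen,
and swap_out/swap_in are assumed helpers.

def handle_fault(page_table, free_frames, select_victim, swap_out, swap_in, page_num):
    """Service a page fault, replacing a victim page if no frame is free."""
    entry = page_table[page_num]
    if free_frames:                                    # 2a. a free frame exists
        frame = free_frames.pop()
    else:                                              # 2b. choose a victim frame
        victim = select_victim(page_table)
        frame = page_table[victim].frame
        swap_out(frame, page_table[victim].disk_addr)  # 2c. write the victim out
        page_table[victim].valid = False               #     and update its entry
        page_table[victim].frame = None
    swap_in(entry.disk_addr, frame)                    # 3. read the desired page in
    entry.frame, entry.valid = frame, True
    return frame                                       # 4. restart the process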
Basic Page Replacement – Modify Bit
• When a page fault occurs and no frames are free, two page transfers (one out and
one in) are required.
• This situation effectively doubles the page-fault service time and increases the
EAT accordingly.
• This can be reduced by keeping a modify bit, or dirty bit, with each page or frame
in the hardware.
• The modify bit for a page is set by the hardware whenever any word or byte is
written into it, indicating that the page has been modified.
• When a page is selected for replacement, its modify bit is examined.
• If the bit is set, the page has been modified since it was read in from the disk. In
this case, the page must be written to the disk.
• If the dirty bit is not set, the page has not been modified since it was
read into memory. In this case, the page does not need to be written to the disk; it
is already there.
• This scheme therefore reduces the time required to service a page fault, since it
cuts the I/O time roughly in half when the page has not been modified.
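Ex (illustrative sketch): assuming the page table entry also carries a dirty flag, the
write-out in step 2c above becomes conditional.

def evict(page_table, victim, swap_out):
    """Free the victim's frame, writing it to swap space only if it was modified."""
    entry = page_table[victim]
    if entry.dirty:              # dirty bit is set by hardware on any write to the page
        swap_out(entry.frame, entry.disk_addr)
        entry.dirty = False      # the swap-space copy is now up to date
    entry.valid = False
    freed, entry.frame = entry.frame, None
    return freed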
Page Replacement Algorithms
• There are many different page replacement algorithms.
• Evaluate an algorithm by running it on a particular string of memory references
(called a reference string) and computing the number of page faults on that
string.
• The string consists of page numbers only, not full addresses.
• Repeated access to the same page does not cause a page fault.
• Results depend on the number of frames available.
• To determine the number of page faults for a particular reference string and
page replacement algorithm, we need to know the number of page frames
available.
• As the number of available page frames increases, the number of page faults
decreases.
• The expected curve is as shown in the figure.
FIFO Page Replacement
• The simplest page replacement algorithm.
• In this algorithm, the operating system keeps track of all pages in memory in
a queue, with the oldest page at the front of the queue.
• When a page needs to be replaced, the page at the front of the queue is selected
for removal.
Ex: Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
3 frames (3 pages can be in memory at a time per process)

Number of page faults = 15
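Ex (illustrative sketch): a short Python simulation reproduces this count; the queue
records the order in which pages were brought into memory.

from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                        # repeated access: no fault
        faults += 1
        if len(frames) == nframes:
            frames.remove(queue.popleft())  # evict the oldest resident page
        frames.add(page)
        queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15

The same function, run on the reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 and
then 4 frames, illustrates Belady's Anomaly discussed on the next slides.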


FIFO Page Replacement
• FIFO is easy to understand, but its performance is not always good.
• If an actively used page is replaced with a new one, a fault occurs almost
immediately to retrieve the active page, and some other page must then be replaced
to bring the active page back into memory.
• Thus a bad replacement choice increases the page fault rate and slows process
execution.
Ex: Reference string: 1,2,3,4,1,2,5,1,2,3,4,5
• Adding more frames can cause more page faults!
• Belady's Anomaly: for some page replacement algorithms, the page fault rate
increases as the number of frames increases.
FIFO Page Replacement
1. Reference string: 1,2,3,4,1,2,5,1,2,3,4,5 and frames = 3

Reference string:  1  2  3  4  1  2  5  1  2  3  4  5
Frame contents after each fault (faults occur at the references 1, 2, 3, 4, 1, 2, 5, 3, 4):
Frame 1:  1  1  1  4  4  4  5  5  5
Frame 2:     2  2  2  1  1  1  3  3
Frame 3:        3  3  3  2  2  2  4

Number of page faults = 9

2. Frames = 4, Number of page faults = ?

3. Reference string: 4 7 6 1 7 6 1 2 7 2 7 1 and frames = 3

Advantages:
• Simple to understand and implement; pages are replaced in the order in which they
were brought in.
Disadvantages:
• An actively used page may be replaced, causing an immediate fault to bring it back.
• Suffers from Belady's Anomaly.
Optimal Page Replacement (OPT)
• It is a result of the discovery of Belady's Anomaly.
• It has the lowest page fault rate of all algorithms and will never suffer from
Belady's Anomaly.
• The main idea of this algorithm is simple: for every reference,
1. If the referenced page is already present, count it as a hit.
2. If it is not present, look for a page that is never referenced again in the future.
If such a page exists, replace it with the new page.
3. If no such page exists, find the page that is referenced farthest in the future.
4. Replace that page with the new page.
Ex: Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
Frames = 3

Number of page faults = 9
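Ex (illustrative sketch): in a simulation the whole reference string is known in
advance, so OPT can be mimicked offline.

def opt_faults(refs, nframes):
    """Count page faults for optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                # Position of the next reference to p; pages never used
                # again are preferred victims (treated as infinitely far).
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9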
Optimal Page Replacement (OPT)
• Optimal page replacement achieves the minimum number of page faults in
theory. However, it is not practical because it requires knowledge of future
memory references.
Ex:
1. Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 and page frames = 4
LRU Page Replacement
• LRU – Least Recently Used
• The key distinction between the FIFO and OPT algorithms is that the FIFO algorithm
uses the time when a page was brought into memory, whereas the OPT
algorithm uses the time when a page is to be used.
• The LRU algorithm replaces the page that has not been used for the longest
period of time.
• So LRU replacement associates with each page the time of that page's last use.
• This strategy can be thought of as the optimal page replacement algorithm looking
backward in time, rather than forward.
Ex: 1. Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 and Frames = 3

Number of page faults = 12
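Ex (illustrative sketch): the same count can be reproduced by tracking the time of
each resident page's last use.

def lru_faults(refs, nframes):
    """Count page faults for least-recently-used replacement."""
    last_use, faults = {}, 0            # resident page -> time of most recent use
    for t, page in enumerate(refs):
        if page not in last_use:
            faults += 1
            if len(last_use) == nframes:
                lru = min(last_use, key=last_use.get)   # oldest last use
                del last_use[lru]
        last_use[page] = t              # record this reference
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12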
LRU Page Replacement
2. Reference String: 7 0 2 4 3 1 4 7 2 0 4 3 0 3 2 7 and frames = 3
3. Reference String: 0 1 2 3 0 1 4 0 1 2 3 4 and frames = 3

• LRU is often used as a page replacement algorithm and is considered to be good.
• The major problem is how to implement LRU replacement. It requires hardware
assistance.
Two implementations are possible: using counters and using a stack.
LRU Page Replacement
Counter implementation:
• Each page table entry is associated with a time-of-use field, and the CPU is given
a logical clock or counter.
• The clock is incremented for every page reference.
• Whenever a reference to a page is made, the contents of the clock register are
copied to the time-of-use field in the page table entry for that page.
• This provides the “time” of the last reference to each page.
• Replace the page with the smallest time value.
• This scheme requires:
• a write to the time-of-use field in the page table for every memory access, and
• a search of the page table to find the LRU page.
LRU Page Replacement
Stack implementation:
• Keep a stack of page numbers in a doubly linked list with head and tail pointers.
• Whenever a page is referenced:
• It is removed from the stack and put on the top.
• In this way the most recently used page is always on the top of the stack and the
least recently used page is always at the bottom.
• Each update is a little more expensive, but
• No search is needed for replacement; the tail pointer points to the bottom of the
stack, which is the LRU page (see the sketch below).
• Like optimal replacement, LRU replacement does not suffer from Belady's Anomaly.
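Ex (illustrative sketch): in software, the stack idea maps naturally onto an ordered
dictionary. This is only an illustration; a real LRU implementation needs the
hardware assistance noted above.

from collections import OrderedDict

class LRUStack:
    """Most recently used page at one end, least recently used at the other."""
    def __init__(self):
        self.order = OrderedDict()
    def touch(self, page):
        self.order.pop(page, None)      # remove the page from its old position...
        self.order[page] = True         # ...and put it on top (most recent end)
    def lru(self):
        return next(iter(self.order))   # the tail: least recently used page

s = LRUStack()
for page in [7, 0, 1, 7]:
    s.touch(page)
print(s.lru())   # 0 (pages in order of recency: 0, 1, 7)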
