9 Virtual Memory


Virtual Memory

Virtual memory
 The requirement that instructions must be in physical
memory to be executed seems both necessary and
reasonable, but it is also unfortunate, since it limits the size
of a program to the size of physical memory. In fact, an
examination of real programs shows that, in many cases,
the entire program is not needed.
 Even in those cases where the entire program is needed, it
may not all be needed at the same time.
 Virtual memory is a technique by which the operating
system gives the processes running on the machine the
illusion of having far more memory to work with than the
capacity of RAM would indicate.
 It does this by keeping the most recently used pages in
RAM, storing less frequently used pages on the slower
disk, and transferring pages between the two as they are
needed.
Benefits of Virtual memory
 A program would no longer be constrained by the amount
of physical memory that is available. Users would be able
to write programs for an extremely large virtual address
space, simplifying the programming task.
 Because each user program could take less physical
memory, more programs could be run at the same time,
with a corresponding increase in CPU utilization and
throughput.
 Less I/O would be needed to load or swap each user
program into memory, so each user program would run
faster.
 Virtual memory can be implemented via:
 Demand paging
 Demand segmentation
Virtual Memory That is Larger Than Physical
Memory


Virtual-address Space
 The virtual address space of a process refers to the
logical (or virtual) view of how a process is stored in
memory.
 Typically, this view is that a process begins at a
certain logical address (say, address 0) and exists in
contiguous memory.
 In fact, physical memory may be organized in page
frames, and the physical page frames assigned to a
process may not be contiguous.
 It is up to the memory management unit (MMU) to
map logical pages to physical page frames in
memory.
Virtual-address Space
Shared Library Using Virtual
Memory
 In addition to separating logical memory from physical memory,
virtual memory also allows files and memory to be shared by
two or more processes through page sharing. This leads to the
following benefits:
 System libraries can be shared by several processes through
mapping of the shared object into a virtual address space.
Although each process considers the shared libraries to be part
of its virtual address space, the actual pages where the
libraries reside in physical memory are shared by all the
processes. Typically, a library is mapped read-only into the
space of each process that is linked with it.
 Similarly, virtual memory enables processes to share memory.
Virtual memory allows one process to create a region of
memory that it can share with another process. Processes
sharing this region consider it part of their virtual address
space, yet the actual physical pages of memory are shared,
much as is illustrated in Figure 9.3.
 Virtual memory can allow pages to be shared during process
creation with the fork() system call, thus speeding up process
creation.
Shared Library Using Virtual
Memory
Demand Paging
 Consider how an executable program might be loaded from disk
into memory. One option is to load the entire program into
physical memory at program execution time. However, a
problem with this approach is that we may not initially need
the entire program in memory.
 An alternative strategy is to initially load pages only as they are
needed. This technique is known as demand paging and is
commonly used in virtual memory systems.
 With demand-paged virtual memory, pages are only loaded
when they are demanded during program execution; pages that
are never accessed are thus never loaded into physical
memory.
 A demand-paging system is similar to a paging system with
swapping where processes reside in secondary memory
(usually a disk).
 When we want to execute a process, we swap it into memory.
Rather than swapping the entire process into memory, however,
a lazy swapper brings in only the pages that are needed.
Demand Paging
 Lazy swapper – a lazy swapper never swaps a page into
memory unless that page will be needed.
 Since we are now viewing a process as a sequence of
pages, rather than as one large contiguous address
space, use of the term swapper is technically incorrect.
 A swapper manipulates entire processes, whereas a
pager is concerned with the individual pages of a process.
We thus use pager, rather than swapper, in connection
with demand paging.
Transfer of a Paged Memory to Contiguous Disk
Space
Valid-Invalid Bit
 With each page table entry a valid–invalid bit is associated
(v → in-memory, i → not-in-memory)
 Initially, the valid–invalid bit is set to i on all entries
 Example of a page table snapshot (frame numbers omitted):

      Frame #   valid–invalid bit
                       v
                       v
                       v
                       v
                       i
        ...           ...
                       i
                       i
              page table

 During address translation, if the valid–invalid bit in the
page table entry is i → page fault
Page Table When Some Pages Are Not in Main
Memory
Page Fault

 If there is a reference to a page, the first reference to
that page will trap to the operating system: page fault
1. Operating system looks at another table to decide:
    Invalid reference → abort
    Just not in memory
2. Get empty frame
3. Swap page into frame
4. Reset tables
5. Set validation bit = v
6. Restart the instruction that caused the page fault
Steps in Handling a Page Fault
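The steps above can be sketched in Python. This is a toy model under assumed names (`PageTableEntry`, `free_frames`, and `backing_store` are all hypothetical); a real handler runs in the kernel with hardware support:

```python
class PageTableEntry:
    def __init__(self):
        self.frame = None      # physical frame holding the page, if any
        self.valid = False     # valid-invalid bit: False means 'i'

def access(page_table, free_frames, backing_store, vpn):
    """Toy demand-paging lookup: load the page on first reference."""
    entry = page_table[vpn]
    if not entry.valid:                  # bit is 'i': page fault (step 1)
        frame = free_frames.pop()        # get an empty frame (step 2)
        frame[:] = backing_store[vpn]    # swap the page into the frame (step 3)
        entry.frame = frame              # reset tables (step 4)
        entry.valid = True               # set validation bit = v (step 5)
        # step 6: the faulting instruction would now be restarted
    return entry.frame
```

A second reference to the same page finds the bit set to v and proceeds without a fault.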
What happens if there is no free frame?

 Page replacement – find some page in memory that is
not really in use, and swap it out
 algorithm – which page should be replaced?
 performance – want an algorithm which will
result in the minimum number of page faults
 Same page may be brought into memory several
times
Page Replacement
 Prevent over-allocation of memory by modifying the page-
fault service routine to include page replacement
 Use the modify (dirty) bit to reduce the overhead of page
transfers – only modified pages are written to disk
 Page replacement completes the separation between
logical memory and physical memory – a large virtual
memory can be provided on a smaller physical
memory
Need For Page Replacement
Basic Page Replacement

1. Find the location of the desired page on disk

2. Find a free frame:
   - If there is a free frame, use it
   - If there is no free frame, use a page
     replacement algorithm to select a victim frame

3. Bring the desired page into the (newly) free
   frame; update the page and frame tables

4. Restart the process


Page Replacement
Page Replacement Algorithms

 Want lowest page-fault rate

 Evaluate an algorithm by running it on a
particular string of memory references
(reference string) and computing the
number of page faults on that string

 In all our examples, the reference string is

   1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Graph of Page Faults Versus The Number of
Frames
First-In-First-Out (FIFO) Algorithm
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 3 frames (3 pages can be in memory at a time per
process):

      1   1   4   5
      2   2   1   3      9 page faults
      3   3   2   4

 4 frames:

      1   1   5   4
      2   2   1   5      10 page faults
      3   3   2
      4   4   3

 Belady’s Anomaly: more frames → more page faults
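Both fault counts, and the anomaly itself, can be checked with a short simulation (a sketch; `fifo_faults` is an illustrative helper, not part of the text):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames = deque()              # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:    # page fault
            faults += 1
            if len(frames) == nframes:
                frames.popleft()  # evict the page loaded earliest
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```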


FIFO Page Replacement
FIFO Illustrating Belady’s Anomaly
Optimal Algorithm
 Replace the page that will not be used for the longest
period of time
 4 frames example:
  1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

      1           4
      2                   6 page faults
      3
      4       5

 How do you know this? (requires knowledge of
future references)
 Used for measuring how well your algorithm performs
Optimal Page Replacement
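The 6-fault result can be reproduced by looking ahead in the reference string (a sketch; OPT cannot be implemented online precisely because it needs this future knowledge, which is why it serves only as a yardstick):

```python
def optimal_faults(refs, nframes):
    """Count page faults under the optimal (OPT/MIN) policy."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            # Evict the resident page whose next use lies farthest
            # in the future (or that is never used again).
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else float('inf'))
            frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 4))  # 6
```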
Least Recently Used (LRU)
Algorithm
 Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

      1   1   1   1   5
      2   2   2   2   2      8 page faults
      3   5   5   4   4
      4   4   3   3   3

 Counter implementation
 Every page entry has a counter; every time the page
is referenced through this entry, copy the clock into
the counter
 When a page needs to be changed, look at the
counters to determine which to change
LRU Page Replacement
LRU Algorithm (Cont.)
 Stack implementation – keep a stack of page numbers
in a double link form:
 Page referenced:
 move it to the top
 requires 6 pointers to be changed
 No search for replacement
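The stack idea above can be sketched with an ordinary Python list standing in for the doubly linked stack (moving a referenced page to the end plays the role of "move it to the top"):

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU; the list is ordered LRU -> MRU."""
    frames = []
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)   # refresh: move to the MRU end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8
```

On this reference string LRU's 8 faults fall between OPT's 6 and FIFO's 10 with the same 4 frames.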
Use Of A Stack to Record The Most Recent Page
References
LRU Approximation Algorithms
 Reference bit
 With each page associate a bit, initially = 0
 When page is referenced bit set to 1
 Replace the one which is 0 (if one exists)
 We do not know the order, however
 Second chance
 Need reference bit
 Clock replacement
 If page to be replaced (in clock order) has reference
bit = 1 then:
 set reference bit 0
 leave page in memory
 replace next page (in clock order), subject to same
rules
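The clock rules above can be sketched as follows (an illustrative model; `clock_faults` is a hypothetical helper, and loading a page here sets its reference bit, one common convention):

```python
def clock_faults(refs, nframes):
    """Count page faults under second-chance (clock) replacement."""
    frames = [None] * nframes     # pages arranged in a circle
    refbit = [0] * nframes        # one reference bit per frame
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            refbit[frames.index(page)] = 1   # referenced: set bit to 1
            continue
        faults += 1
        # Advance the hand, clearing bits, until a bit of 0 is found.
        while refbit[hand] == 1:
            refbit[hand] = 0                 # second chance given
            hand = (hand + 1) % nframes
        frames[hand] = page                  # replace the victim
        refbit[hand] = 1
        hand = (hand + 1) % nframes
    return faults
```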
Second-Chance (clock) Page-Replacement
Algorithm
Counting Algorithms
 Keep a counter of the number of references that
have been made to each page

 LFU Algorithm: replaces the page with the smallest
count

 MFU Algorithm: based on the argument that the
page with the smallest count was probably just
brought in and has yet to be used
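An LFU counter sketch (illustrative only; ties here break toward the page found first, a policy detail the slide leaves open):

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """Count page faults under LFU (least frequently used) replacement."""
    frames = []
    count = Counter()             # references made to each page so far
    faults = 0
    for page in refs:
        count[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            victim = min(frames, key=lambda p: count[p])
            frames[frames.index(victim)] = page
    return faults
```

MFU would be the same sketch with `min` replaced by `max`.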
Allocation of Frames

 Each process needs a minimum number of pages
 Example: IBM 370 – 6 pages to handle SS MOVE
instruction:
 instruction is 6 bytes, might span 2 pages
 2 pages to handle from
 2 pages to handle to
 Two major allocation schemes
 fixed allocation
 priority allocation
Fixed Allocation

 Equal allocation – For example, if there are 100 frames
and 5 processes, give each process 20 frames.
 Proportional allocation – Allocate according to the size
of the process:

   si = size of process pi
   S = Σ si
   m = total number of frames
   ai = allocation for pi = (si / S) × m

  Example: m = 64, s1 = 10, s2 = 127
   a1 = (10 / 137) × 64 ≈ 5
   a2 = (127 / 137) × 64 ≈ 59
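The arithmetic above can be checked directly (a sketch; in general the rounded allocations may not sum exactly to m, so a real allocator would distribute any remainder):

```python
def proportional_allocation(sizes, m):
    """Allocate m frames to processes in proportion to their sizes."""
    S = sum(sizes)                             # S = sum of s_i
    return [round(s * m / S) for s in sizes]   # a_i = (s_i / S) * m

print(proportional_allocation([10, 127], 64))  # [5, 59]
```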
Priority Allocation

 Use a proportional allocation scheme using
priorities rather than size

 If process Pi generates a page fault,
 select for replacement one of its frames
 select for replacement a frame from a
process with a lower priority number
Global vs. Local Allocation

 Global replacement – process selects a
replacement frame from the set of all frames;
one process can take a frame from another
 Local replacement – each process selects from
only its own set of allocated frames
Thrashing

 If a process does not have “enough” pages, the page-
fault rate is very high. This leads to:
 low CPU utilization
 operating system thinks that it needs to increase
the degree of multiprogramming
 another process added to the system

 Thrashing ≡ a process is busy swapping pages in and
out
Thrashing (Cont.)
Working-Set Model
 Δ ≡ working-set window ≡ a fixed number of page
references
Example: 10,000 instructions
 WSSi (working set of process Pi) =
total number of pages referenced in the most
recent Δ (varies in time)
 if Δ too small, it will not encompass the entire locality
 if Δ too large, it will encompass several localities
 if Δ = ∞, it will encompass the entire program
 D = Σ WSSi ≡ total demand frames
 if D > m → Thrashing
 Policy: if D > m, then suspend one of the processes
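The window can be sketched directly (illustrative; `delta` plays the role of Δ, and the reference string and window size are made up for the example):

```python
def working_sets(refs, delta):
    """Working set at each time t: distinct pages among the last
    delta references ending at t."""
    return [set(refs[max(0, t - delta + 1):t + 1])
            for t in range(len(refs))]

refs = [1, 2, 1, 3, 4, 4, 3, 3, 2, 2]
ws = working_sets(refs, 4)
print(ws[9])   # {2, 3} -- the locality has shrunk by the end
```

Summing the working-set sizes across all processes gives D; if D exceeds the frame count m, one process should be suspended.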
Working-set model
Keeping Track of the Working Set
 Approximate with an interval timer + a reference bit
 Example: Δ = 10,000
 Timer interrupts after every 5,000 time units
 Keep in memory 2 bits for each page
 Whenever the timer interrupts, copy and set the
values of all reference bits to 0
 If one of the bits in memory = 1 → page in working
set
 Why is this not completely accurate?
 Improvement: 10 bits and interrupt every 1,000 time
units
