9 Virtual Memory
Virtual Memory
The requirement that instructions must be in physical
memory to be executed seems both necessary and
reasonable, but it is also unfortunate, since it limits the size
of a program to the size of physical memory. In fact, an
examination of real programs shows that, in many cases,
the entire program is not needed.
Even in cases where the entire program is needed, it
may not all be needed at the same time.
Virtual memory is a technique by which the operating
system gives each process the illusion of having far more
memory to work with than the capacity of RAM would
indicate.
It does this by keeping the most recently used items in RAM,
storing less frequently used items on slower disk storage,
and transferring data between the two as needed.
Benefits of Virtual Memory
A program would no longer be constrained by the amount
of physical memory that is available. Users would be able
to write programs for an extremely large virtual address
space, simplifying the programming task.
Because each user program could take less physical
memory, more programs could be run at the same time,
with a corresponding increase in CPU utilization and
throughput.
Less I/O would be needed to load or swap each user
program into memory, so each user program would run
faster.
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Virtual Memory That Is Larger Than Physical Memory
Virtual-address Space
The virtual address space of a process refers to the
logical (or virtual) view of how a process is stored in
memory.
Typically, this view is that a process begins at a
certain logical address (say, address 0) and exists in
contiguous memory.
In fact, physical memory may be organized into page
frames, and the physical page frames assigned to
a process may not be contiguous.
It is up to the memory management unit (MMU) to
map logical pages to physical page frames in
memory.
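The MMU's job can be sketched with a toy translation function in Python. The page size, the page-table contents, and the translate helper below are illustrative assumptions, not a real MMU interface:

```python
PAGE_SIZE = 4096  # assume 4 KiB pages

# Hypothetical page table: logical page number -> physical frame number.
# The process sees pages 0, 1, 2 as contiguous, yet the frames are not.
page_table = {0: 7, 1: 2, 2: 9}

def translate(logical_addr):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]   # raises KeyError if the page is not mapped
    return frame * PAGE_SIZE + offset

# Logical address 4100 lies in page 1 (offset 4), which maps to frame 2.
print(translate(4100))  # 2 * 4096 + 4 = 8196
```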
Shared Library Using Virtual Memory
In addition to separating logical memory from physical memory,
virtual memory also allows files and memory to be shared by
two or more processes through page sharing. This leads to the
following benefits:
System libraries can be shared by several processes through
mapping of the shared object into a virtual address space.
Although each process considers the shared libraries to be part
of its virtual address space, the actual pages where the
libraries reside in physical memory are shared by all the
processes. Typically, a library is mapped read-only into the
space of each process that is linked with it.
Similarly, virtual memory enables processes to share memory.
Virtual memory allows one process to create a region of
memory that it can share with another process. Processes
sharing this region consider it part of their virtual address
space, yet the actual physical pages of memory are shared,
much as is illustrated in Figure 9.3.
Virtual memory can allow pages to be shared during process
creation with the fork() system call, thus speeding up process
creation.
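Page sharing can be modeled in a few lines of Python. The frame list, the two page tables, and the read helper below are hypothetical names used only to illustrate that two address spaces can map to one physical frame:

```python
# Model: physical memory is a list of frames; each process has its own
# page table, but both tables can point at the same frame (page sharing).
physical = ["libc code"] + [None] * 7   # frame 0 holds the shared library page

# Hypothetical page tables: virtual page -> (frame, writable?).
# Both processes map the shared library read-only at virtual page 0.
proc_a = {0: (0, False), 1: (3, True)}
proc_b = {0: (0, False), 1: (5, True)}

def read(page_table, vpage):
    frame, _ = page_table[vpage]
    return physical[frame]

# Each process sees the library in its own address space,
# yet only one physical copy exists.
print(read(proc_a, 0), read(proc_b, 0))
```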
Demand Paging
Consider how an executable program might be loaded from disk
into memory. One option is to load the entire program in
physical memory at program execution time. However, a
problem with this approach is that we may not initially need
the entire program in memory.
An alternative strategy is to initially load pages only as they are
needed. This technique is known as demand paging and is
commonly used in virtual memory systems.
With demand-paged virtual memory, pages are only loaded
when they are demanded during program execution; pages that
are never accessed are thus never loaded into physical
memory.
A demand-paging system is similar to a paging system with
swapping where processes reside in secondary memory
(usually a disk).
When we want to execute a process, we swap it into memory;
rather than swapping the entire process in, however, we bring
in pages only as they are needed.
Demand Paging
Lazy swapper – a lazy swapper never swaps a page into
memory unless that page will be needed.
Since we are now viewing a process as a sequence of
pages, rather than as one large contiguous address
space, use of the term swapper is technically incorrect.
A swapper manipulates entire processes, whereas a
pager is concerned with the individual pages of a process.
We thus use pager, rather than swapper, in connection
with demand paging.
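A minimal sketch of a pager in Python, assuming a dictionary as the page table and a valid-invalid bit per entry; the run helper is illustrative, not a real kernel interface:

```python
# Pages start invalid ('i'); a reference to an invalid page triggers a
# page fault, and only then is the page brought into memory ('v').
def run(references, table):
    faults = 0
    for page in references:
        if table.get(page, 'i') == 'i':   # valid-invalid bit check
            faults += 1                    # page fault: load page from disk
            table[page] = 'v'
        # else: page already in memory, access proceeds normally
    return faults

table = {}
print(run([0, 1, 0, 2, 1], table))  # 3 faults: only pages 0, 1, 2 are loaded
```

Pages that are never referenced are never loaded, which is exactly the lazy behavior described above.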
Transfer of a Paged Memory to Contiguous Disk Space
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v in-memory, i not-in-memory)
Initially valid–invalid bit is set to i on all entries
Example of a page table snapshot:
[Figure: page table snapshot showing entries for pages in memory marked v and all other entries marked i]
Graph of Page Faults Versus the Number of Frames
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
3 frames (3 pages can be in memory at a time per process):

    1  1  4  5
    2  2  1  3      9 page faults
    3  3  2  4

4 frames:

    1  1  5  4
    2  2  1  5      10 page faults
    3  3  2
    4  4  3

Note that adding more frames produces more page faults here (Belady's anomaly).
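The fault counts can be checked with a short FIFO simulation in Python; fifo_faults is an illustrative helper, not a standard API:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # memory full: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: Belady's anomaly
```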
Least Recently Used (LRU) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (4 frames)

    1  1  1  1  5
    2  2  2  2  2      8 page faults
    3  5  5  4  4
    4  4  3  3  3

Replace the page that has not been used for the longest period of time.
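The same reference string can be run through an LRU simulation; the lru_faults helper below is a sketch that keeps the frames in recency order:

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    frames, faults = [], 0          # frames[0] is the least recently used page
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)         # most recently used page goes to the end
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))  # 8 page faults for this reference string
```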
Counter implementation
Every page entry has a counter; every time page is
referenced through this entry, copy the clock into
the counter
When a page needs to be replaced, look at the
counters to find the page with the smallest value
(this requires a search of the table)
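A sketch of the counter implementation in Python, assuming a simple integer clock; reference and victim are hypothetical helper names:

```python
# A global "clock" is copied into a page's counter on every reference;
# the replacement victim is the page with the smallest counter value.
clock = 0
counters = {}   # page -> clock value at last reference

def reference(page):
    global clock
    clock += 1
    counters[page] = clock

def victim():
    # Linear search for the smallest counter: the LRU page.
    return min(counters, key=counters.get)

for p in [1, 2, 3, 1]:
    reference(p)
print(victim())  # page 2 was referenced least recently
```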
LRU Page Replacement
LRU Algorithm (Cont.)
Stack implementation – keep a stack of page numbers
as a doubly linked list:
Page referenced:
move it to the top
requires 6 pointers to be changed
No search for replacement
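The stack can be sketched with Python's OrderedDict, which stands in for the doubly linked list; move_to_end moves a referenced page to the top in O(1), and the victim is read off the bottom with no search:

```python
from collections import OrderedDict

stack = OrderedDict()   # front = bottom of stack (LRU), end = top (MRU)

def reference(page):
    if page in stack:
        stack.move_to_end(page)   # move to top: constant time, no search
    else:
        stack[page] = True

for p in [4, 7, 0, 7, 1, 0]:
    reference(p)
print(next(iter(stack)))  # 4 sits at the bottom: the replacement victim
```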
Use Of A Stack to Record The Most Recent Page
References
LRU Approximation Algorithms
Reference bit
With each page associate a bit, initially = 0
When page is referenced bit set to 1
Replace a page whose bit is 0 (if one exists)
We do not know the order, however
Second chance
Need reference bit
Clock replacement
If page to be replaced (in clock order) has reference
bit = 1 then:
set reference bit to 0
leave page in memory
replace next page (in clock order), subject to same
rules
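The second-chance algorithm can be simulated as follows. This sketch assumes the reference bit is also set when a page is first loaded (one common variant); second_chance is an illustrative helper:

```python
def second_chance(refs, nframes):
    """Count page faults under second-chance (clock) replacement."""
    frames = [None] * nframes   # circular buffer of pages
    refbit = [0] * nframes
    slot = {}                   # page -> frame index
    hand = faults = 0
    for page in refs:
        if page in slot:
            refbit[slot[page]] = 1  # hit: give the page a second chance
            continue
        faults += 1
        while refbit[hand]:         # sweep: clear bits until a victim is found
            refbit[hand] = 0
            hand = (hand + 1) % nframes
        if frames[hand] is not None:
            del slot[frames[hand]]  # evict the victim page
        frames[hand] = page
        slot[page] = hand
        refbit[hand] = 1            # the load itself counts as a reference
        hand = (hand + 1) % nframes
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(second_chance(refs, 3))  # 9 page faults under this variant
```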
Second-Chance (clock) Page-Replacement
Algorithm
Counting Algorithms
Keep a counter of the number of references that
have been made to each page
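One counting algorithm, least frequently used (LFU), replaces the page with the smallest reference count. A sketch, with lfu_faults as an illustrative helper:

```python
from collections import defaultdict

def lfu_faults(refs, nframes):
    """Count page faults under LFU: evict the page with the fewest references."""
    counts = defaultdict(int)
    frames, faults = set(), 0
    for page in refs:
        counts[page] += 1
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                victim = min(frames, key=lambda p: counts[p])
                frames.discard(victim)   # evict the least frequently used page
            frames.add(page)
    return faults

# Page 1 is referenced three times, so page 2 (count 1) is evicted for page 3.
print(lfu_faults([1, 1, 1, 2, 3], 2))  # 3 page faults
```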