08-VM
Operating System
Background
Code needs to be in memory to execute, but the entire program is rarely used
Unusual routines, large data structures
Entire program code not needed at the same time
Consider the ability to execute a partially-loaded program
Program no longer constrained by limits of physical memory
Each program takes less memory while running -> more programs can run at the same time
Increased CPU utilization and throughput with no increase in response time or turnaround time
Less I/O needed to load or swap programs into memory -> each user program runs faster
Virtual memory
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical address space
Allows address spaces to be shared by several processes
More programs running concurrently
Less I/O needed to load or swap processes
Demand Paging
Could bring entire process into memory at load time
Or bring a page into memory only when it is needed
Less I/O needed, no unnecessary I/O
Less memory needed
Faster response
More users
Similar to a paging system with swapping
Page is needed -> reference to it
invalid reference -> abort
not-in-memory -> bring to memory
Lazy swapper – never swaps a page into memory unless that page will be needed
Swapper that deals with pages is a pager
Basic Concepts
With swapping, the pager guesses which pages will be used before swapping out again
Instead, the pager brings only those pages into memory
How to determine that set of pages?
Need new MMU functionality to implement demand paging
If pages needed are already memory resident
No difference from non-demand paging
If page needed and not memory resident
Need to detect and load the page into memory from storage
Without changing program behavior
Without programmer needing to change code
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v -> in-memory, i.e. memory resident; i -> not-in-memory)
Initially the valid–invalid bit is set to i on all entries
Example of a page table snapshot: resident pages marked v, all other entries marked i
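The snapshot figure is not reproduced here; as a rough illustration, a page-table entry carrying a valid–invalid bit might be modeled in C like this (the field names and widths are made up for the sketch, not those of any particular CPU):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative page-table entry: a frame number plus a valid-invalid bit. */
typedef struct {
    uint32_t frame : 20;   /* physical frame number (meaningful only if valid) */
    uint32_t valid : 1;    /* 1 = in memory (v), 0 = not in memory (i)          */
} pte_t;

int main(void) {
    pte_t page_table[8] = {0};      /* all entries start invalid (i) */
    page_table[0].frame = 4;        /* page 0 resident in frame 4    */
    page_table[0].valid = 1;

    for (int p = 0; p < 8; p++)
        printf("page %d: %s\n", p, page_table[p].valid ? "v" : "i");
    return 0;
}
```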
Steps in Handling Page Fault
1. If there is a reference to a page, the first reference to that page will trap to the operating system
Page fault
2. Operating system looks at another table to decide:
Invalid reference -> abort
Just not in memory
3. Find a free frame
4. Swap the page into the frame via a scheduled disk operation
5. Reset tables to indicate the page is now in memory
Set validation bit = v
6. Restart the instruction that caused the page fault
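As a hedged, user-level sketch of these six steps (the page table, free-frame list, and "backing store" below are simulated arrays, not a real kernel's data structures, and the example assumes a free frame is still available):

```c
#include <stdio.h>
#include <string.h>

#define NPAGES   8
#define NFRAMES  4
#define PAGESIZE 16

static int  valid[NPAGES];              /* 0 = i, 1 = v                    */
static int  frame_of[NPAGES];           /* frame holding the page, if v    */
static int  free_frames[NFRAMES] = {0, 1, 2, 3};
static int  nfree = NFRAMES;
static char backing_store[NPAGES][PAGESIZE];   /* stands in for swap space */
static char memory[NFRAMES][PAGESIZE];

/* Steps 2-6: decide, find a free frame, read the page in, update the
   table, then let the faulting access be retried. */
static void handle_page_fault(int page) {
    if (page < 0 || page >= NPAGES) {   /* invalid reference -> abort      */
        fprintf(stderr, "invalid reference: abort\n");
        return;
    }
    int frame = free_frames[--nfree];   /* step 3: find a free frame       */
    memcpy(memory[frame], backing_store[page], PAGESIZE); /* step 4: swap in */
    frame_of[page] = frame;             /* step 5: reset tables            */
    valid[page]    = 1;                 /*         set validation bit = v  */
}

static char read_byte(int page, int offset) {
    if (!valid[page])
        handle_page_fault(page);        /* step 1: trap on first reference */
    return memory[frame_of[page]][offset]; /* step 6: restart the access   */
}

int main(void) {
    strcpy(backing_store[3], "hello");
    printf("%c\n", read_byte(3, 0));    /* faults once, then prints 'h'    */
    return 0;
}
```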
Aspects of Demand Paging
Extreme case – start a process with no pages in memory
OS sets instruction pointer to the first instruction of the process, which is non-memory-resident -> page fault
And likewise for every other page of the process on first access
Pure demand paging
Actually, a given instruction could access multiple pages -> multiple page faults
Hardware support needed for demand paging
Page table with valid/invalid bit
Secondary memory (swap device with swap space)
Instruction restart
Free-Frame List
When a page fault occurs, the operating system must bring the desired page from secondary storage into main memory.
Most operating systems maintain a free-frame list – a pool of free frames for satisfying such requests.
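A minimal sketch of such a free-frame list, here kept as a singly linked list threaded through an array (the layout is illustrative, not how any particular kernel stores it):

```c
#include <stdio.h>

#define NFRAMES 16

/* next_free[f] is the frame after f on the free list; -1 ends the list. */
static int next_free[NFRAMES];
static int head = -1;

static void free_frame(int f) { next_free[f] = head; head = f; }

static int alloc_frame(void) {
    if (head < 0) return -1;        /* list empty: page replacement needed */
    int f = head;
    head = next_free[f];
    return f;
}

int main(void) {
    for (int f = NFRAMES - 1; f >= 0; f--)  /* initially every frame is free */
        free_frame(f);
    printf("got frame %d\n", alloc_frame());
    printf("got frame %d\n", alloc_frame());
    return 0;
}
```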
Stages in Demand Paging – Worst Case
Performance of Demand Paging
Page fault rate p, where 0 ≤ p ≤ 1
if p = 0, no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT)
EAT = (1 – p) × memory access time + p × (page fault overhead + swap page out + swap page in)
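As a small worked example of the formula (the 200 ns access time, 8 ms fault-service time, and fault rate below are illustrative numbers, not measurements):

```c
#include <stdio.h>

/* EAT = (1 - p) * memory access time + p * page-fault service time */
int main(void) {
    double mem_access = 200e-9;     /* 200 ns ordinary memory access  */
    double fault_time = 8e-3;       /* 8 ms to service a page fault   */
    double p          = 1.0 / 1000; /* one fault per 1000 accesses    */

    double eat = (1 - p) * mem_access + p * fault_time;
    printf("EAT = %.0f ns\n", eat * 1e9);   /* about 8200 ns here */
    return 0;
}
```

Even a fault rate of one in a thousand inflates the effective access time from 200 ns to roughly 8.2 microseconds, which is why the fault rate must be kept very low.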
What Happens if There is no Free Frame?
Page replacement – find some page in memory that is not really in use and page it out
Algorithm – terminate? swap out? replace the page?
Performance – want an algorithm which will result in the minimum number of page faults
Page Replacement
Need For Page Replacement
Note that now potentially two page transfers are needed per page fault – increasing EAT
Graph of Page Faults Versus The Number of Frames
FIFO algorithm
View frames allocated to the process as a circular buffer
When the buffer is full, the oldest page is replaced: first-in first-out
Simple implementation: just a pointer to cycle the frames
Reference string: 2 3 2 1 5 2 4 5 3 2 5 2
3 frames (3 pages can be in memory at a time per process)
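A small simulation of FIFO on this reference string with 3 frames, using a single pointer to cycle the frames as described above (a sketch, not a production replacement policy):

```c
#include <stdio.h>
#include <string.h>

#define NFRAMES 3

int main(void) {
    int ref[] = {2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2};
    int n = sizeof ref / sizeof ref[0];

    int frames[NFRAMES];
    memset(frames, -1, sizeof frames);   /* -1 = empty frame */
    int hand = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[hand] = ref[i];       /* replace the oldest page */
            hand = (hand + 1) % NFRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults); /* 9 for this reference string */
    return 0;
}
```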
FIFO Illustrating Belady’s Anomaly
Belady’s anomaly: with FIFO, giving a process more frames can sometimes increase its number of page faults
OPT algorithm
The page that will not be referenced for the longest time is replaced
OPT requires future knowledge of the reference string, so it cannot be implemented in practice and is used as a benchmark for other algorithms
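A sketch of how an OPT victim could be chosen if the remaining reference string were known (the `frames` and `future` arrays below are illustrative inputs, not part of the original slides):

```c
#include <stdio.h>

/* Evict the resident page whose next reference lies farthest in the
   future (or that is never referenced again). Returns a frame index. */
static int opt_victim(const int *frames, int nframes,
                      const int *future, int nfuture) {
    int victim = 0, farthest = -1;
    for (int f = 0; f < nframes; f++) {
        int next = nfuture;                  /* "never used again" */
        for (int i = 0; i < nfuture; i++)
            if (future[i] == frames[f]) { next = i; break; }
        if (next > farthest) { farthest = next; victim = f; }
    }
    return victim;
}

int main(void) {
    int frames[] = {2, 3, 1};                /* resident pages             */
    int future[] = {5, 2, 4, 5, 3, 2, 5, 2}; /* remaining reference string */
    int v = opt_victim(frames, 3, future, 8);
    printf("evict frame %d (page %d)\n", v, frames[v]); /* page 1 here */
    return 0;
}
```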
Least Recently Used (LRU) algorithm
Replace the page that has not been used for the longest period of time
Use Of A Stack to Record Most Recent Page References
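A minimal sketch of this stack implementation of LRU: every reference moves the page to the top, so the least recently used page is always at the bottom (the trace in `main` is illustrative, and the stack is assumed to be large enough for all distinct pages):

```c
#include <stdio.h>

#define MAXSTACK 8

static int stack[MAXSTACK];
static int depth = 0;

/* On each reference, pull the page to the top of the stack (MRU). */
static void reference(int page) {
    int i = 0;
    while (i < depth && stack[i] != page) i++;   /* find page, if present */
    if (i == depth) depth++;                     /* new page: grow stack  */
    for (; i > 0; i--) stack[i] = stack[i - 1];  /* shift others down     */
    stack[0] = page;                             /* move to top           */
}

int main(void) {
    int ref[] = {2, 3, 2, 1, 5, 2};
    for (int i = 0; i < 6; i++) reference(ref[i]);
    printf("LRU page: %d\n", stack[depth - 1]);  /* bottom of stack: 3 */
    return 0;
}
```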
Second-chance (CLOCK) algorithm
The frames allocated to the process are treated as a circular buffer, and each frame has a reference (use) bit
To choose a victim, the pointer sweeps the buffer: a frame whose reference bit is set gets a second chance (the bit is cleared and the pointer advances), and the first frame found with its bit already clear is replaced
When a page is replaced, the pointer is set to the next frame in the buffer
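A small sketch of the clock hand's victim search under these assumptions (the frame contents and reference bits below are made up for illustration):

```c
#include <stdio.h>

#define NFRAMES 4

static int page_in[NFRAMES] = {6, 7, 8, 9};  /* illustrative contents */
static int ref_bit[NFRAMES] = {1, 0, 1, 1};
static int hand = 0;

/* Sweep the circular buffer: clear set reference bits (second chance),
   stop at the first frame whose bit is already 0. */
static int clock_victim(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;     /* pointer moves past victim */
            return victim;
        }
        ref_bit[hand] = 0;                   /* give a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    int v = clock_victim();
    printf("replace frame %d (page %d)\n", v, page_in[v]);  /* frame 1 */
    return 0;
}
```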
Enhanced Second-Chance Algorithm
Considers the reference bit and the modify bit together as an ordered pair, giving four page classes; the first page encountered in the lowest nonempty class is replaced
Thrashing
If a process does not have “enough” pages, the page-fault rate is very high
Page fault to get page
Replace existing frame
But quickly need the replaced frame back
This leads to:
Low CPU utilization
Operating system thinking that it needs to increase the degree of multiprogramming
Another process added to the system
Thrashing (Cont.)
Thrashing: a process is busy swapping pages in and out
Locality model
Process migrates from one locality to another
Localities may overlap
Why does thrashing occur? The combined size of the active localities exceeds the total memory size
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
WSSi (working set of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
if Δ too small, it will not encompass the entire locality
if Δ too large, it will encompass several localities
if Δ = ∞, it will encompass the entire program
D = Σ WSSi ≡ total demand frames
Approximation of locality
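A minimal sketch of computing WSS over a reference trace, assuming page numbers below 64 and an illustrative window Δ = 5 (the trace itself is made up):

```c
#include <stdio.h>

/* Working set at time t: the distinct pages referenced in the last
   `delta` references of the trace. */
static int wss(const int *ref, int t, int delta) {
    int seen[64] = {0}, count = 0;
    int start = (t - delta + 1 > 0) ? t - delta + 1 : 0;
    for (int i = start; i <= t; i++)
        if (!seen[ref[i]]) { seen[ref[i]] = 1; count++; }
    return count;
}

int main(void) {
    int ref[] = {1, 2, 1, 5, 7, 7, 7, 7, 5, 1};   /* page reference trace */
    int delta = 5;                                /* working-set window Δ */
    printf("WSS at t=9: %d pages\n", wss(ref, 9, delta));  /* {7,5,1} -> 3 */
    return 0;
}
```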
Page-Fault Frequency
More direct approach than WSS
Establish an “acceptable” page-fault frequency (PFF) rate and use a local replacement policy
If the actual rate is too low, the process loses a frame
If the actual rate is too high, the process gains a frame
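A toy sketch of this control rule; the 10% and 1% thresholds and the one-frame adjustment step are arbitrary illustrative choices, not values from the slides:

```c
#include <stdio.h>

/* Keep a process's fault rate inside an acceptable band by growing or
   shrinking its frame allocation. */
static int adjust_frames(double fault_rate, int frames) {
    const double upper = 0.10, lower = 0.01;
    if (fault_rate > upper) return frames + 1;                /* too high: gain a frame */
    if (fault_rate < lower && frames > 1) return frames - 1;  /* too low: lose a frame  */
    return frames;                                            /* within the band        */
}

int main(void) {
    printf("%d\n", adjust_frames(0.20, 4));   /* 5: rate too high */
    printf("%d\n", adjust_frames(0.005, 4));  /* 3: rate too low  */
    return 0;
}
```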
Buddy System
Allocates memory from a fixed-size segment consisting of physically-contiguous pages
Memory allocated using a power-of-2 allocator
Satisfies requests in units sized as a power of 2
Request rounded up to the next highest power of 2
When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
Continue until an appropriately sized chunk is available
For example, assume a 256 KB chunk is available and the kernel requests 21 KB
Split into AL and AR of 128 KB each
One further divided into BL and BR of 64 KB each
One further divided into CL and CR of 32 KB each – one used to satisfy the request
Advantage – quickly coalesce unused chunks into a larger chunk
Disadvantage – fragmentation
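A small sketch of the sizing and splitting in this example: the 21 KB request is rounded up to 32 KB and the 256 KB chunk is halved until a 32 KB buddy exists (the code only prints the splits; it is not a full buddy allocator with free lists or coalescing):

```c
#include <stdio.h>

/* Round a request up to the next power of two (sizes in KB). */
static unsigned round_up_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void) {
    unsigned chunk = 256, request = 21;
    unsigned need = round_up_pow2(request);      /* 32 KB */

    printf("request %u KB -> block of %u KB\n", request, need);
    while (chunk > need) {                       /* 256 -> 128 -> 64 -> 32 */
        printf("split %u KB into two %u KB buddies\n", chunk, chunk / 2);
        chunk /= 2;
    }
    printf("allocate one %u KB buddy\n", chunk);
    return 0;
}
```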
Buddy System Allocator