Lecture 06
Operating System
Professor Mangal Sain
Lecture 6
Virtual Memory
OBJECTIVES
Demand Paging
BACKGROUND
Code needs to be in memory to execute, but entire
program rarely used
Error code, unusual routines, large data structures
Entire program code not needed at the same time
Consider ability to execute partially-loaded program
Program no longer constrained by limits of physical memory
Each program takes less memory while running -> more
programs run at the same time
Increased CPU utilization and throughput with no increase in
response time or turnaround time
Less I/O needed to load or swap programs into memory -> each
user program runs faster
PERFORMANCE OF DEMAND PAGING
Stages in demand paging (worst case):
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page
on the disk
5. Issue a read from the disk to a free frame:
1. Wait in a queue for this device until the read request is serviced
2. Wait for the device seek and/or latency time
3. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show the page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume
the interrupted instruction
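The fault-handling flow can be sketched as a toy simulation; the backing store, free-frame list, and fault counter below are hypothetical stand-ins for the real kernel structures, not actual OS code:

```python
# Toy demand-paging simulation of the fault-handling steps above.
# "disk", "free_frames", and "page_table" are simplified stand-ins
# for kernel structures (hypothetical names for illustration).

disk = {0: "code", 1: "data", 2: "stack"}   # backing store: page -> contents
page_table = {}                              # page -> frame (resident pages only)
free_frames = [0, 1, 2]
faults = 0

def access(page):
    """Return the frame holding `page`, faulting it in if necessary."""
    global faults
    if page in page_table:                   # resident: no fault
        return page_table[page]
    if page not in disk:                     # check the reference is legal
        raise MemoryError(f"segmentation fault: page {page}")
    faults += 1
    frame = free_frames.pop()                # read the page into a free frame
    page_table[page] = frame                 # correct the page table
    return frame                             # resume the interrupted instruction

access(0); access(1); access(0)
print(faults)  # 2: only the first touch of each page faults
```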
PERFORMANCE OF DEMAND PAGING (CONT.)
Three major activities
Service the interrupt – careful coding means just several hundred
instructions needed
Read the page – lots of time
Restart the process – again just a small amount of time
Page Fault Rate: 0 ≤ p ≤ 1
if p = 0, no page faults
if p = 1, every reference is a fault
Effective Access Time (EAT)
EAT = (1 − p) × memory access time
+ p × (page fault overhead
+ swap page out
+ swap page in)
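Plugging numbers into the EAT formula makes the penalty concrete; the 200 ns memory access and 8 ms total fault-service time below are assumed example values, not figures from the slides:

```python
# EAT = (1 - p) * memory_access + p * fault_service_time
memory_access = 200          # ns, assumed example value
fault_service = 8_000_000    # ns (8 ms total fault overhead), assumed example value

def eat(p):
    """Effective access time in ns for page-fault rate p."""
    return (1 - p) * memory_access + p * fault_service

print(eat(0.0))    # 200.0 ns: no faults, pure memory access
print(eat(0.001))  # 8199.8 ns: one fault per 1000 accesses slows memory ~40x
```

Even a fault rate of one in a thousand makes memory appear roughly 40 times slower, which is why the fault rate must be kept extremely low.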
DEMAND PAGING OPTIMIZATIONS
Swap space I/O faster than file system I/O even if on the same device
Swap allocated in larger chunks, less management needed than file
system
Copy entire process image to swap space at process load time
Then page in and out of swap space
Used in older BSD Unix
Demand page in from program binary on disk, but discard rather than
paging out when freeing frame
Used in Solaris and current BSD
Still need to write to swap space
Pages not associated with a file (like stack and heap) – anonymous
memory
Pages modified in memory but not yet written back to the file
system
Mobile systems
Typically don’t support swapping
Instead, demand page from file system and reclaim read-only pages
(such as code)
Lecture 6 – Part 2
COPY-ON-WRITE
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
If either process modifies a shared page, only then is the page copied – on page fault
vfork() variation on fork() system call suspends the parent and lets the child use the parent's address space directly (without copy-on-write)
Designed to have the child call exec() immediately
Very efficient
BEFORE PROCESS 1 MODIFIES PAGE C
AFTER PROCESS 1 MODIFIES PAGE C
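The before/after figures can be reproduced at the process level with a small sketch (POSIX only); this demonstrates the copy-on-write *semantics* of fork() — each process gets its own private copy of "page C" on the first write — not the kernel mechanism itself:

```python
# Copy-on-write semantics of fork() (POSIX only): after the child
# modifies "page C", the parent still sees its own unmodified copy.
import os

page_c = ["original"]            # stand-in for shared page C

pid = os.fork()
if pid == 0:                     # child: this write triggers a private copy
    page_c[0] = "modified"
    os._exit(0)

os.waitpid(pid, 0)               # wait for the child to finish
print(page_c[0])                 # parent's page is untouched: "original"
```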
PAGE REPLACEMENT
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page-replacement algorithm to select a victim frame, and write the victim to disk if it is dirty
3. Bring the desired page into the (newly) free frame; update
the page and frame tables
4. Continue the process by restarting the instruction that caused the trap
FIFO replacement on the example reference string gives 15 page faults
Many variations of page-replacement algorithms exist
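A short FIFO simulation reproduces the 15-fault count; the reference string below is the commonly used textbook example with 3 frames (an assumption about which string the slide's figure showed):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with `nframes` frames."""
    frames = deque()             # oldest page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue             # hit: no fault, FIFO order unchanged
        faults += 1
        if len(frames) == nframes:
            frames.popleft()     # evict the page that was loaded earliest
        frames.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))      # 15 page faults
```

Swapping in a different replacement policy only changes the eviction line, which is why simulators like this are handy for comparing algorithms.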
GLOBAL VS. LOCAL ALLOCATION
OTHER ISSUES – PREPAGING
Prepaging
To reduce the large number of page faults that occur
at process startup
Prepage all or some of the pages a process will need,
before they are referenced
But if prepaged pages are unused, the I/O and memory
spent on them are wasted
Assume s pages are prepaged and a fraction α of them
is actually used
Is the cost of the s × α page faults saved greater or less
than the cost of prepaging
s × (1 − α) unnecessary pages?
If α is near zero, prepaging loses
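The break-even condition can be checked numerically; the per-fault and per-prepage costs below are hypothetical placeholders for whatever units a real analysis would use:

```python
# Prepaging pays off when  s*alpha*cost_fault  >  s*(1-alpha)*cost_prepage
# (s cancels, so we evaluate per prepaged page).
def prepaging_wins(alpha, cost_fault, cost_prepage):
    saved  = alpha * cost_fault          # cost of the faults avoided
    wasted = (1 - alpha) * cost_prepage  # cost of unnecessary prepaging
    return saved > wasted

# With equal costs the break-even point is alpha = 0.5:
print(prepaging_wins(0.9, 1.0, 1.0))   # True: most prepaged pages get used
print(prepaging_wins(0.1, 1.0, 1.0))   # False: prepaging loses near alpha = 0
```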
OTHER ISSUES – PAGE SIZE
Sometimes OS designers have a choice
Especially if running on custom-built CPU
Page size selection must take into consideration:
Fragmentation
Page table size
Resolution
I/O overhead
Number of page faults
Locality
TLB size and effectiveness
Always a power of 2, usually in the range 2^12 (4,096
bytes) to 2^22 (4,194,304 bytes)
On average, growing over time
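The page-table-size consideration can be made concrete with a quick calculation; the 32-bit virtual address space and 4-byte page-table entry are assumed example parameters:

```python
# Larger pages shrink the (flat) page table but increase internal
# fragmentation, which averages about half a page per process region.
VADDR_BITS = 32      # assumed 32-bit virtual address space
PTE_BYTES  = 4       # assumed page-table entry size

def page_table_bytes(page_size):
    """Size of a flat page table for the assumed address space."""
    entries = 2 ** VADDR_BITS // page_size
    return entries * PTE_BYTES

print(page_table_bytes(4096))        # 4 KB pages -> 4,194,304 bytes (4 MiB table)
print(page_table_bytes(4 * 2**20))   # 4 MB pages -> 4,096 bytes (4 KiB table)
```

The thousand-fold swing in table size, set against the growth in average wasted half-page, is exactly the trade-off the list above asks designers to balance.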
OTHER ISSUES – TLB REACH