Operating Systems Principles, Memory Management. Lecture 8: Virtual Memory. Lecturer: 虞台文
Content
- Principles of Virtual Memory
- Implementations of Virtual Memory: Paging; Segmentation; Paging with Segmentation; Paging of System Tables; Translation Look-Aside Buffers
- Memory Allocation in Paged Systems: Global Page Replacement Algorithms; Local Page Replacement Algorithms; Load Control and Thrashing; Evaluation of Paging
Operating Systems Principles Memory Management Lecture 8: Virtual Memory Principles of Virtual Memory
The Virtual Memory
Virtual memory is a technique that allows processes that may not be entirely in memory to execute, by means of automatic storage allocation upon request. The term virtual memory refers to the abstraction of separating LOGICAL memory (memory as seen by the process) from PHYSICAL memory (memory as seen by the processor). The programmer needs to be aware of only the logical memory space, while the operating system maintains two or more levels of physical memory space.
Principles of Virtual Memory
[Figure: a 4 GB virtual memory per process (addresses 00000000 to FFFFFFFF) is mapped by the Address Map onto a much smaller 64 MB physical memory (addresses 0000000 to 3FFFFFF).]
Principles of Virtual Memory
- For each process, the system creates the illusion of a large contiguous memory space (or spaces).
- Relevant portions of Virtual Memory (VM) are loaded automatically and transparently.
- The Address Map translates virtual addresses to physical addresses.
Approaches
- Single-segment Virtual Memory: one area of words 0…n−1, divided into fixed-size pages.
- Multiple-Segment Virtual Memory: multiple areas of up to n words (0…n−1) each. Each area holds a logical segment (e.g., a function or a data structure), and each is either contiguous or divided into pages.
Main Issues in VM Design
- Address mapping: how to translate virtual addresses to physical addresses?
- Placement: where to place a portion of VM needed by a process?
- Replacement: which portion of VM to remove when space is needed?
- Load control: how many processes may be active at any one time?
- Sharing: how can processes share portions of their VMs?
Operating Systems Principles Memory Management Lecture 8: Virtual Memory Implementations of Virtual Memory
Implementations of VM
- Paging
- Segmentation
- Paging with Segmentation
- Paging of System Tables
- Translation Look-Aside Buffers
Paged Virtual Memory
The virtual memory consists of pages 0, 1, 2, …, P−1, each of size 2^n. A virtual address is a pair (p, w): the page number p and the offset w within that page.
Physical Memory
The physical memory consists of frames 0, 1, 2, …, F−1, each of the same size 2^n. A physical address is a pair (f, w): the frame number f and the offset w. The size of physical memory is usually much smaller than the size of virtual memory.
Virtual & Physical Addresses
A virtual address va consists of |p| bits of page number p followed by |w| bits of offset w; a physical address pa consists of |f| bits of frame number f followed by the same |w| bits of offset w. Since physical memory is usually much smaller than virtual memory, typically |f| < |p|.
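The split of an address into page number and offset is plain bit extraction. A minimal Python sketch, assuming a hypothetical 12-bit offset (4 KiB pages); the constants are illustrative, not from the slides:

```python
PAGE_BITS = 12                     # |w| = 12, i.e., 4 KiB pages (illustrative choice)
PAGE_SIZE = 1 << PAGE_BITS

def split_va(va):
    """Split a virtual address into (page number p, offset w)."""
    return va >> PAGE_BITS, va & (PAGE_SIZE - 1)

def make_pa(f, w):
    """Concatenate frame number f and offset w into a physical address."""
    return (f << PAGE_BITS) | w
```

For example, split_va(0x12345) yields page 0x12 and offset 0x345, and make_pa recombines them.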
Address Mapping
Given a virtual address (p, w), how do we determine the frame number f from the page number p, producing the physical address (f, w)?
Address Mapping
Each process (pid = id) has its own virtual memory. Given (id, p, w), how do we determine f from (id, p)?
Frame Tables
One solution is a frame table FT with one entry per page frame: FT[f] records which process (pid) and which page of that process currently occupy frame f. Given (id, p, w), the pair (id, p) is searched for in the frame table to find f.
Address Translation via Frame Table
Each process (pid = id) has its own virtual memory. Given (id, p, w), determine f from (id, p):

address_map(id, p, w) {
  pa = UNDEFINED;
  for (f = 0; f < F; f++)
    if (FT[f].pid == id && FT[f].page == p)
      pa = f * 2^n + w;   /* concatenate frame number and offset */
  return pa;
}
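The linear frame-table search can be sketched in Python; the table contents, page size, and process IDs below are invented for illustration:

```python
PAGE_BITS = 12
UNDEFINED = None

# FT[f] = (pid, page) currently held in frame f. A linear search over all
# frames is needed for every reference -- the "inefficient" point below.
FT = [(1, 0), (2, 5), (1, 3), (2, 0)]

def address_map(pid, p, w):
    for f, (owner, page) in enumerate(FT):
        if owner == pid and page == p:
            return (f << PAGE_BITS) | w   # concatenate frame number and offset
    return UNDEFINED                      # would be a page fault in a real system
```

Here process 1's page 3 sits in frame 2, so address_map(1, 3, w) returns frame 2's base plus w.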
Disadvantages
- Inefficient: the mapping must be performed for every memory reference.
- Costly: the search must be done in parallel in hardware (i.e., an associative memory).
- Sharing of pages is difficult or impossible.
Associative Memories as Translation Look-Aside Buffers
When memory is large, frame tables tend to be quite large and cannot be kept in associative memory entirely. To alleviate this, small associative memories are used as translation look-aside buffers (detailed shortly).
Page Tables
Alternatively, each process has a page table: entry p of the table holds the frame number f of page p. The Page Table Register (PTR) holds the location of the current process's page table, so translating (p, w) means indexing the table at PTR + p to obtain f.
Page Tables
A page table keeps track of the current locations of all pages belonging to a given process. The OS sets PTR to point at the page table of the current process at run time.

address_map(p, w) {
  pa = *(PTR + p) + w;
  return pa;
}

Drawback: one extra memory access is needed for every read/write operation (to fetch the page-table entry). Solution: a Translation Look-Aside Buffer.
Demand Paging
Pure paging: all pages of VM are loaded initially. Simple, but the maximum size of VM = size of PM.
Demand paging: pages are loaded as needed, "on demand". An additional bit in each PT entry indicates the page's presence/absence in memory; a "page fault" occurs when the referenced page is absent.

resident(m): true if the m-th page is in memory; false if it is missing.

address_map(p, w) {
  if (resident(*(PTR + p))) {
    pa = *(PTR + p) + w;
    return pa;
  }
  else page_fault;
}
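The resident-bit check can be sketched as follows. The page-table contents and the 8-bit offset are assumptions for the example, and the PageFault exception stands in for the OS's fault handler:

```python
PAGE_BITS = 8

# Each page-table entry is (resident_bit, frame_number); values are made up.
page_table = {0: (True, 3), 1: (False, None), 2: (True, 7)}

class PageFault(Exception):
    """Raised when the referenced page is not resident in memory."""

def address_map(p, w):
    resident, frame = page_table.get(p, (False, None))
    if not resident:
        raise PageFault(p)          # the OS would now load the page from disk
    return (frame << PAGE_BITS) | w
```

Referencing page 1 raises PageFault, modeling the absent-page case; resident pages translate normally.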
Segmentation
- Multiple contiguous address spaces ("segments")
- A more natural match to program/data structure
- Easier sharing (Chapter 9)
- va = (s, w) is mapped to pa (but there are no frames)
- Where/how are segments placed in PM? Contiguous versus paged allocation
Contiguous Allocation Per Segment: Segment Tables
Entry s of the segment table holds the starting physical address of segment s. The Segment Table Register (STR) points at the segment table of the current process; (s, w) is translated by indexing the table at STR + s.
Contiguous Allocation Per Segment
- Each segment is contiguous in PM
- The Segment Table (ST) tracks starting locations
- STR points to the ST
Address translation:

address_map(s, w) {
  if (resident(*(STR + s))) {
    pa = *(STR + s) + w;
    return pa;
  }
  else segment_fault;
}

Drawback: external fragmentation.
Paging with Segmentation
Each segment is divided into fixed-size pages, so va = (s, p, w):
- |s| determines the number of segments (size of ST)
- |p| determines the number of pages per segment (size of each PT)
- |w| determines the page size
Address translation:

address_map(s, p, w) {
  pa = *(*(STR + s) + p) + w;
  return pa;
}

Drawback: 2 extra memory references.
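The two table lookups (the "2 extra memory references") can be illustrated with a small Python sketch; the tables and the 8-bit offset are invented for the example:

```python
PAGE_BITS = 8

# segment_table[s] is the page table of segment s; each page table maps a
# page number p to a frame number. All values are made up for illustration.
segment_table = [
    {0: 4, 1: 9},      # segment 0 has pages 0 and 1
    {0: 2},            # segment 1 has page 0
]

def address_map(s, p, w):
    pt = segment_table[s]          # 1st extra memory reference: fetch PT location
    f = pt[p]                      # 2nd extra memory reference: fetch frame number
    return (f << PAGE_BITS) | w    # concatenate frame number and offset
```

So (s=0, p=1, w=3) walks ST then PT and lands in frame 9 at offset 3.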
Paging of System Tables
The ST or a PT may itself be too large to keep in PM:
- Divide the ST or PT into pages
- Keep track of them with an additional page table
Paging of the ST: the ST is divided into pages, and a segment directory keeps track of the ST pages. Address translation:

address_map(s1, s2, p, w) {
  pa = *(*(*(STR + s1) + s2) + p) + w;
  return pa;
}

Drawback: 3 extra memory references.
Translation Look-Aside Buffers (TLB)
- Advantage of VM: users view memory in a logical sense.
- Disadvantage of VM: extra memory accesses are needed.
- Solution: Translation Look-aside Buffers (TLB), a special high-speed memory.
- Basic idea: keep the most recent translations of virtual to physical addresses readily available for likely future use; an associative memory is employed as the buffer.
Translation Look-Aside Buffers (TLB)
The buffer is searched associatively (in parallel) for the pair (s, p). When the search of (s, p) fails, a complete address translation is performed and the result is entered into the TLB. Replacement strategy: LRU.
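The TLB behavior just described (hit on (s, p), full translation on a miss, LRU replacement) can be sketched with an ordered dictionary; the capacity and the backing table are assumptions for the example:

```python
from collections import OrderedDict

class TLB:
    """Sketch of a TLB with LRU replacement."""

    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table       # full translation, consulted on a miss
        self.buf = OrderedDict()           # (s, p) -> frame, least recent first

    def lookup(self, s, p):
        key = (s, p)
        if key in self.buf:                # hit: the parallel associative search
            self.buf.move_to_end(key)      # mark entry as most recently used
            return self.buf[key], True
        f = self.page_table[key]           # miss: complete address translation
        if len(self.buf) >= self.capacity:
            self.buf.popitem(last=False)   # evict the least recently used entry
        self.buf[key] = f
        return f, False
```

A 2-entry TLB over a made-up table shows the mechanics: the second lookup of a pair hits, and filling the buffer evicts the least recently used pair.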
Operating Systems Principles Memory Management Lecture 8: Virtual Memory Memory Allocation in Paged Systems
Memory Allocation with Paging
Placement policy: where to allocate memory on request?
- Any free frame is OK (no external fragmentation)
- Keeping track of available space is sufficient, for both statically and dynamically allocated memory
Replacement policy: which page(s) should be replaced on a page fault?
- Goal: minimize the number of page faults and/or the total number of pages loaded.
Global/Local Replacement Policies
- Global replacement: consider all resident pages, regardless of owner.
- Local replacement: consider only pages of the faulting process, i.e., the working set of the process.
Criteria for Performance Comparison
Tool: a Reference String (RS) r0 r1 ... rt ... rT, where rt is the page number referenced at time t.
Criteria:
- The number of page faults
- The total number of pages loaded
The two criteria are equivalent when each page fault loads exactly one page; the number of page faults is used in the following discussions.
Global Page Replacement Algorithms
- Optimal Replacement Algorithm (MIN): accurate prediction needed; unrealizable
- Random Replacement Algorithm: playing a roulette wheel
- FIFO Replacement Algorithm: simple and efficient, but suffers from Belady's anomaly
- Least Recently Used Algorithm (LRU): does not suffer from Belady's anomaly, but high overhead
- Second-Chance Replacement Algorithm: an economical version of LRU
- Third-Chance Replacement Algorithm: an economical version of LRU that also considers dirty pages
Optimal Replacement Algorithm (MIN)
Replace the page that will not be referenced for the longest time in the future. Problem: the reference string is not known in advance.
Example: Optimal Replacement Algorithm (MIN)
RS = c a d b e b a b c d, four frames initially holding a, b, c, d.
- t = 1..4 (c, a, d, b): hits.
- t = 5 (e): page fault; evict d, the resident page whose next reference (t = 10) lies furthest in the future.
- t = 6..9 (b, a, b, c): hits.
- t = 10 (d): page fault; evict a.
Total: 2 page faults.
Random Replacement
A program's reference string is never known in advance. Without any prior knowledge, a random replacement strategy can be applied. Is there any prior knowledge available for common programs? Yes: the locality of reference. Random replacement is simple, but ignores this property.
The Principle of Locality
- Most instructions are sequential, except for branch instructions
- Most loops are short (for-loops, while-loops)
- Many data structures are accessed sequentially (arrays, files of records)
FIFO Replacement Algorithm
FIFO: replace the oldest page, assuming that pages that have resided in memory the longest are least likely to be referenced in the future. Easy to implement, but it may exhibit Belady's anomaly: increasing the available memory can result in more page faults.
Example: FIFO Replacement Algorithm
RS = c a d b e b a b c d, four frames initially holding a, b, c, d (loaded in that order).
- t = 1..4 (c, a, d, b): hits.
- t = 5 (e): fault; evict a (oldest). t = 6 (b): hit.
- t = 7 (a): fault; evict b. t = 8 (b): fault; evict c. t = 9 (c): fault; evict d. t = 10 (d): fault; evict e.
Total: 5 page faults.
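The FIFO policy can be simulated in a few lines; this sketch reproduces the example above, with preload supplying the slide's initial frame contents:

```python
from collections import deque

def fifo_faults(rs, n_frames, preload=()):
    """Count FIFO page faults for reference string rs."""
    frames = deque(preload)                # oldest page at the left
    faults = 0
    for page in rs:
        if page in frames:
            continue                       # hit: FIFO order is unchanged
        faults += 1
        if len(frames) >= n_frames:
            frames.popleft()               # evict the oldest resident page
        frames.append(page)
    return faults
```

fifo_faults("cadbebabcd", 4, preload="abcd") returns 5, matching the trace above.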
Example: Belady's Anomaly
RS = d c b a d c e d c b a e c d c b a e, FIFO replacement, frames initially empty.
- With 2 frames: 17 page faults.
- With 3 frames: 14 page faults.
Example: Belady's Anomaly (continued)
For the same RS = d c b a d c e d c b a e c d c b a e under FIFO:
- With 3 frames: 14 page faults.
- With 4 frames: 15 page faults.
Adding a frame increases the number of page faults: this is Belady's anomaly.
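A short FIFO simulation from an empty memory reproduces the anomaly on this reference string (a sketch; the fault counts match the slide's):

```python
from collections import deque

def fifo_faults(rs, n_frames):
    """Count FIFO page faults starting from empty frames."""
    frames = deque()
    faults = 0
    for page in rs:
        if page in frames:
            continue
        faults += 1
        if len(frames) >= n_frames:
            frames.popleft()               # evict the oldest resident page
        frames.append(page)
    return faults

rs = "dcbadcedcbaecdcbae"
# fifo_faults(rs, 2) -> 17, fifo_faults(rs, 3) -> 14, fifo_faults(rs, 4) -> 15
```

Going from 3 to 4 frames raises the fault count from 14 to 15.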
Least Recently Used Replacement (LRU)
Replace the least recently used page, i.e., remove the page which has not been referenced for the longest time. This complies with the principle of locality and does not suffer from Belady's anomaly.
Example: Least Recently Used Replacement (LRU)
RS = c a d b e b a b c d, four frames initially holding a, b, c, d (a least recently used).
- t = 1..4 (c, a, d, b): hits.
- t = 5 (e): fault; evict c (least recently used).
- t = 6..8 (b, a, b): hits.
- t = 9 (c): fault; evict d. t = 10 (d): fault; evict e.
Total: 3 page faults.
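An LRU simulator for the same example; an ordered dictionary keeps pages in recency order, with the preload matching the slide's initial state:

```python
from collections import OrderedDict

def lru_faults(rs, n_frames, preload=()):
    """Count LRU page faults for reference string rs."""
    recency = OrderedDict((p, None) for p in preload)  # least recent at the left
    faults = 0
    for page in rs:
        if page in recency:
            recency.move_to_end(page)      # hit: page becomes most recently used
            continue
        faults += 1
        if len(recency) >= n_frames:
            recency.popitem(last=False)    # evict the least recently used page
        recency[page] = None
    return faults
```

lru_faults("cadbebabcd", 4, preload="abcd") returns 3, versus 5 faults for FIFO on the same string.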
LRU Implementation
- Software queue: too expensive.
- Time-stamping: stamp each referenced page with the current time; replace the page with the oldest stamp.
- A hardware capacitor with each frame: charge it at each reference; replace the page with the smallest charge.
- An n-bit aging register R = R(n−1) R(n−2) … R1 R0 with each frame: set the left-most bit of the referenced page to 1; shift all registers right at every reference or periodically; replace the page with the smallest value.
Second-Chance Algorithm
Approximates LRU. Implement a use-bit u with each frame; set u = 1 when the page is referenced. To select a page: if u == 0, select the page; else set u = 0 and consider the next frame. A used page thus gets a second chance to stay in PM. Also called the clock algorithm, since the search cycles through the page frames.
Example: Second-Chance Algorithm
RS = c a d b e b a b c d, four frames initially holding a/1, b/1, c/1, d/1 (page/use-bit), pointer at frame 0.
- t = 1..4 (c, a, d, b): hits; all use-bits remain 1.
- t = 5 (e): fault; the pointer sweeps a, b, c, d clearing their use-bits, returns to a (now u = 0), and replaces a with e.
- t = 6 (b): hit (b's use-bit is set again).
- t = 7 (a): fault; b gets a second chance (u cleared), and c (u = 0) is replaced by a.
- t = 8 (b): hit.
- t = 9 (c): fault; d (u = 0) is replaced by c.
- t = 10 (d): fault; a full sweep clears all use-bits and e is replaced by d.
Total: 4 page faults.
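The clock sweep can be simulated directly; the preloaded frames and set use-bits in this sketch match the slide's starting state:

```python
def clock_faults(rs, preload):
    """Count second-chance (clock) page faults; preload frames start with u = 1."""
    frames = list(preload)
    use = [1] * len(frames)
    hand = 0
    faults = 0
    for page in rs:
        if page in frames:
            use[frames.index(page)] = 1    # hit: grant a second chance
            continue
        faults += 1
        while use[hand] == 1:              # sweep, clearing use-bits
            use[hand] = 0
            hand = (hand + 1) % len(frames)
        frames[hand] = page                # u == 0: replace this page
        use[hand] = 1
        hand = (hand + 1) % len(frames)
    return faults
```

clock_faults("cadbebabcd", "abcd") returns 4, between LRU's 3 and FIFO's 5 on this string.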
Third-Chance Algorithm
The second-chance algorithm does not distinguish between read and write accesses, although a write access is more expensive (the page is dirty and must be written back). Give modified pages a third chance:
- the u-bit is set at every reference (read and write)
- the w-bit is set at a write reference
To select a page, cycle through the frames, resetting bits until uw == 00:
  uw = 11 -> 01
  uw = 10 -> 00
  uw = 01 -> 00*  (the * remembers the modification, so the page is still written back)
  uw = 00 -> select
The * can be implemented by an additional bit.
Example: Third-Chance Algorithm
RS = c a(w) d b(w) e b a(w) b c d, where (w) marks a write reference; four frames initially holding a/10, b/10, c/10, d/10 (page/uw), pointer at frame 0.
- t = 1..4 (c, a(w), d, b(w)): hits; the writes leave a/11 and b/11.
- t = 5 (e): fault; the sweep downgrades a/11 to 01, b/11 to 01, c/10 to 00, d/10 to 00, then a/01 to 00* and b/01 to 00*, and selects c (uw = 00); e replaces c.
- t = 6..8 (b, a(w), b): hits.
- t = 9 (c): fault; d (uw = 00) is selected.
- t = 10 (d): fault; after a sweep, b (00*) is selected and written back.
Total: 3 page faults.
Local Page Replacement Algorithms
Measurements indicate that every program needs a "minimum" set of pages:
- If too few are resident, thrashing occurs
- If too many, page frames are wasted
The "minimum" varies over time. How can this "minimum" be determined and implemented, depending only on the behavior of the process itself?
Local Page Replacement Algorithms
- Optimal Page Replacement Algorithm (VMIN)
- The Working Set Model (WS)
- Page Fault Frequency Replacement Algorithm (PFF)
Optimal Page Replacement Algorithm (VMIN)
The method: define a sliding window (t, t + θ) of width θ + 1, where θ is a parameter (a system constant). At any time t, maintain as resident exactly the pages visible in the window. VMIN is guaranteed to generate the smallest number of page faults for a given window width.
Example: Optimal Page Replacement Algorithm (VMIN)
RS = c c d b c e c e a d, θ = 3, page d resident at t = 0.
- t = 1 (c): fault; c stays (its next reference, t = 2, is within the window).
- t = 3 (d): hit; d is then removed (its next reference, t = 10, lies outside the window).
- t = 4 (b): fault; b is removed immediately (no further reference within the window).
- t = 6 (e): fault; e stays until after t = 8.
- t = 9 (a): fault. t = 10 (d): fault.
Total: 5 page faults.
Example: Optimal Page Replacement Algorithm (VMIN)
By increasing θ, the number of page faults can be reduced arbitrarily, at the expense of using more page frames. VMIN is unrealizable, since the reference string is unavailable in advance.
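A VMIN sketch under the window definition above: a page stays resident only while its next reference lies within the next θ references. The initial resident set is a parameter (the slide's trace implies page d is resident at t = 0):

```python
def vmin_faults(rs, theta, resident=()):
    """Count VMIN page faults; resident is the set of pages present at t = 0."""
    resident = set(resident)
    faults = 0
    for t, page in enumerate(rs):
        if page not in resident:
            faults += 1
        lookahead = set(rs[t + 1 : t + 1 + theta])
        # keep only pages that will be referenced again within theta references
        resident = {p for p in resident if p in lookahead}
        if page in lookahead:
            resident.add(page)
    return faults
```

vmin_faults("ccdbcecead", 3, resident="d") returns 5, matching the slide's count.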
Working Set Model
Use a trailing window instead of a future window. The working set W(t, θ) is the set of all pages referenced during the interval (t − θ, t) (instead of (t, t + θ)). At time t: remove all pages not in W(t, θ); a process may run only if its entire working set W(t, θ) is resident.
Example: Working Set Model
RS = e d a | c c d b c e c e a d (the prefix e d a precedes t = 1), θ = 3.
- t = 1 (c): fault. t = 2 (c): hit; e drops out of the window.
- t = 3 (d): hit. t = 4 (b): fault; a drops out.
- t = 5 (c): hit. t = 6 (e): fault.
- t = 7 (c): hit; d drops out. t = 8 (e): hit; b drops out.
- t = 9 (a): fault. t = 10 (d): fault.
Total: 5 page faults.
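A working-set sketch, taking W(t, θ) to be the pages among the last θ + 1 references (so the window includes time t), with the prefix e d a supplying the history before t = 1:

```python
def ws_faults(rs, theta, prefix=""):
    """Count working-set page faults; prefix holds references before t = 1."""
    history = list(prefix)
    faults = 0
    for page in rs:
        # resident set = pages in the trailing window of theta+1 references
        if page not in set(history[-(theta + 1):]):
            faults += 1
        history.append(page)
    return faults
```

ws_faults("ccdbcecead", 3, prefix="eda") returns 5, matching the trace above.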
Approximate Working Set Model
Drawback: the exact working set is costly to implement. Approximations:
1. Give each page frame an aging register: set the left-most bit to 1 whenever the page is referenced, and periodically shift all aging registers to the right. Remove pages whose register reaches zero from the working set.
2. Give each page frame a use bit and a time stamp; the use bit is turned on by hardware whenever the page is referenced. Periodically, for each page frame: if the use bit is on, turn it off and record the current time as the frame's time stamp; otherwise compute the turn-off time t_off of the frame and remove the page from the working set if t_off > t_max.
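Approximation 1 (aging registers) can be sketched as follows; the 8-bit register width is an arbitrary choice for the example:

```python
REG_BITS = 8                               # register width (illustrative choice)

def reference(regs, page):
    """Hardware action on a reference: set the left-most bit."""
    regs[page] |= 1 << (REG_BITS - 1)

def tick(regs):
    """Periodic shift right; returns the pages whose register reached zero
    and therefore leave the working set."""
    for page in regs:
        regs[page] >>= 1
    return {page for page, r in regs.items() if r == 0}
```

A page that is never referenced drops out on the first tick, while a referenced page survives for up to REG_BITS ticks, its register value decaying by halving.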
Page Fault Frequency (PFF) Replacement
Main objective: keep the page fault rate low. Basic principle: if the time between the current (t_c) and the previous (t_{c−1}) page faults exceeds a critical value τ, all pages not referenced during that interval are removed from memory.
The PFF algorithm, applied at each page fault:
- If the time between page faults ≤ τ: grow the resident set by adding the new page.
- If the time between page faults > τ: shrink the resident set by adding the new page and removing all pages not referenced since the last page fault.
Example: Page Fault Frequency (PFF) Replacement
RS = c c d b c e c e a d, τ = 2, pages a, d, e resident at t = 0.
- t = 1 (c): fault (interval 1 ≤ τ): add c.
- t = 4 (b): fault (interval 3 > τ): add b; remove a and e, not referenced since the last fault.
- t = 6 (e): fault (interval 2 ≤ τ): add e.
- t = 9 (a): fault (interval 3 > τ): add a; remove b and d.
- t = 10 (d): fault (interval 1 ≤ τ): add d.
Total: 5 page faults.
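The PFF rule can be simulated as below; the initial resident set is a parameter (the slide's trace implies a, d, e resident at t = 0):

```python
def pff_faults(rs, tau, resident=()):
    """Count PFF page faults; resident is the set of pages present at t = 0."""
    resident = set(resident)
    last_fault = 0
    referenced = set()                 # pages referenced since the last fault
    faults = 0
    for t, page in enumerate(rs, start=1):
        if page in resident:
            referenced.add(page)
            continue
        faults += 1
        if t - last_fault > tau:       # shrink: keep only recently used pages
            resident = referenced | {page}
        else:                          # grow: just add the new page
            resident.add(page)
        referenced = {page}
        last_fault = t
    return faults
```

pff_faults("ccdbcecead", 2, resident="ade") returns 5, matching the slide's count.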
Load Control and Thrashing
Main issues:
- How to choose the degree of multiprogramming? When should it be decreased or increased?
- When the level is decreased, which process should be deactivated?
- When a process is created, or a suspended one reactivated, which of its pages should be loaded: one or many?
Load control: the policy that sets the number and type of concurrent processes.
Thrashing: the system spends most of its effort moving pages between main and secondary memory, i.e., CPU utilization is low.
Load Control: Choosing the Degree of Multiprogramming
- Local replacement: each process has a well-defined resident set (e.g., working set model, PFF replacement). This automatically imposes a limit: processes are added up to the point where total memory is allocated.
- Global replacement: there is no working set concept, so CPU utilization is used as the criterion. With too many processes, thrashing occurs.
[Figure: CPU utilization versus degree of multiprogramming N rises to an optimum N_max and then collapses as the system starts thrashing. L = mean time between faults; S = mean page fault service time.]
How can the optimum, i.e., N_max, be determined?
Load Control: Choosing the Degree of Multiprogramming
How to determine N_max:
- L = S criterion: the mean time between faults L should keep up with the mean page fault service time S.
- 50% criterion: CPU utilization is highest when the paging disk is about 50% busy (found experimentally).
- Clock load control: scan the list of page frames to find a page to replace; if the pointer advance rate is too low, increase the multiprogramming level.
Load Control: Choosing the Process to Deactivate
- The lowest-priority process: consistent with the scheduling policy.
- The faulting process: eliminates the process that would be blocked anyway.
- The last process activated: the most recently activated process is considered least important.
- The smallest process: least expensive to swap in and out.
- The largest process: frees up the largest number of frames.
Load Control: Prepaging
Which pages should be loaded when a process is activated? Prepage the last resident set.
Evaluation of Paging
Advantages of paging: a simple placement strategy and no external fragmentation. Parameters affecting the dynamic behavior of paged systems: page size and available memory.
Evaluation of Paging
A process needs a certain percentage of its pages within a short time period after activation, so prepaging is important.
Evaluation of Paging
A smaller page size is beneficial: it covers a process's locality more precisely, and it reduces memory waste due to internal fragmentation. However, small pages require larger page tables.
Evaluation of Paging
W: the minimum amount of memory needed to avoid thrashing. Load control is important.
issbp
 
Class6
 Class6 Class6
Class6
issbp
 
Class4
 Class4 Class4
Class4
issbp
 
Class3
 Class3 Class3
Class3
issbp
 
Class2
 Class2 Class2
Class2
issbp
 
Class1
 Class1 Class1
Class1
issbp
 
0227 regularlanguages
 0227 regularlanguages 0227 regularlanguages
0227 regularlanguages
issbp
 

Os8 2

  • 9. Main Issues in VM Design Address mapping How to translate virtual addresses to physical addresses? Placement Where to place a portion of VM needed by a process? Replacement Which portion of VM to remove when space is needed? Load control How many processes can be activated at any one time? Sharing How can processes share portions of their VMs?
  • 10. Operating Systems Principles Memory Management Lecture 8: Virtual Memory Implementations of Virtual Memory
  • 11. Implementations of VM Paging Segmentation Paging With Segmentation Paging of System Tables Translation Look-aside Buffers
  • 12. Paged Virtual Memory Virtual memory is divided into fixed-size pages of 2^n words each, numbered 0, 1, 2, …, P − 1. A virtual address is the pair ( p , w ): page number p and offset w (0 ≤ w < 2^n) within the page.
  • 13. Physical Memory Physical memory is divided into frames of the same size 2^n, numbered 0, 1, 2, …, F − 1. A physical address is the pair ( f , w ): frame number f and offset w. The size of physical memory is usually much smaller than the size of virtual memory.
  • 14. Virtual & Physical Addresses A virtual address va consists of | p | page-number bits followed by | w | offset bits; a physical address pa consists of | f | frame-number bits followed by the same | w | offset bits. Because physical memory is usually much smaller than virtual memory, | f | < | p |.
  • 15. Address Mapping The address map translates a virtual address ( p , w ) over pages 0 … P − 1 into a physical address ( f , w ) over frames 0 … F − 1: given ( p , w ), how do we determine f from p ?
  • 16. Address Mapping Each process (pid = id ) has its own virtual memory, so translation takes ( id , p , w ) to ( f , w ): given ( id , p , w ), how do we determine f from ( id , p )?
  • 17. Frame Tables A frame table FT has one entry per physical frame f ; each entry records the process (pid) and page currently occupying that frame. To translate ( id , p , w ), search FT for the entry whose pid and page fields match ( id , p ); that entry’s index is the frame number f .
  • 18. Address Translation via Frame Table Each process (pid = id ) has its own virtual memory. Given ( id , p , w ), f is determined from ( id , p ) by searching the frame table: address_map(id, p, w) { pa = UNDEFINED; for (f = 0; f < F; f++) if (FT[f].pid == id && FT[f].page == p) pa = f + w; return pa; }
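The search can be sketched in Python; the page size and frame-table contents below are invented for illustration, and pa is formed by concatenating frame number and offset (here computed as f * 2^n + w):

```python
PAGE_SIZE = 2 ** 12  # assumed 4 KiB pages (illustrative)

# Frame table: index = frame number; entry = (pid, page) occupying that frame.
FT = [(1, 7), (2, 0), (1, 3), (2, 5)]

def address_map(pid, p, w):
    """Translate (id, p, w) by linear search of the frame table."""
    for f, (owner, page) in enumerate(FT):
        if owner == pid and page == p:
            return f * PAGE_SIZE + w   # frame number concatenated with offset
    return None                        # UNDEFINED: page not resident
```

In hardware the search runs in parallel over all entries; the explicit loop makes the cost of a software equivalent obvious: one comparison per frame on every reference.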
  • 19. Disadvantages Inefficient : the mapping must be performed for every memory reference . Costly : the search must be done in parallel in hardware (i.e., associative memory). Sharing of pages is difficult or impossible.
  • 20. Associative Memories as Translation Look-Aside Buffers When memory size is large , frame tables tend to be quite large and cannot be kept entirely in associative memory. To alleviate this, associative memories are used as translation look-aside buffers , to be detailed shortly.
  • 21. Page Tables Each process has a page table; entry p of the table holds the frame number f of page p . The Page Table Register ( PTR ) points to the page table of the current process; the offset w is carried over unchanged into the physical address ( f , w ).
  • 22. Page Tables A page table keeps track of the current locations of all pages belonging to a given process. PTR is set by the OS at run time to point at the PT of the current process. Drawback: an extra memory access is needed for every read/write operation. Solution: Translation Look-Aside Buffer address_map(p, w) { pa = *(PTR + p) + w; return pa; }
  • 23. Demand Paging Pure Paging All pages of VM are loaded initially Simple, but maximum size of VM = size of PM Demand Paging Pages are loaded as needed: “on demand ” Additional bit in PT indicates a page’s presence/absence in memory “ Page fault ” occurs when page is absent
  • 24. Demand Paging Pure Paging All pages of VM are loaded initially Simple, but maximum size of VM = size of PM Demand Paging Pages are loaded as needed: “on demand ” An additional bit in the PT indicates a page’s presence/absence in memory A “ page fault ” occurs when the page is absent resident( m ) True : the m th page is in memory. False : the m th page is missing. address_map(p, w) { if (resident(*(PTR + p))) { pa = *(PTR + p) + w; return pa; } else page_fault; }
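The resident-bit check can be sketched as follows; the page table, page size, and PageFault exception are illustrative, not from the slides:

```python
PAGE_SIZE = 2 ** 12
PT = {0: 5, 1: None, 2: 9}   # page -> frame; None = resident bit clear

class PageFault(Exception):
    """Raised when the referenced page is not in memory."""

def address_map(p, w):
    f = PT.get(p)
    if f is None:
        raise PageFault(p)        # trap: the OS must load the page
    return f * PAGE_SIZE + w
```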
  • 25. Segmentation Multiple contiguous spaces (“segments”) More natural match to program/data structure Easier sharing (Chapter 9) va = ( s , w ) is mapped to pa (but no frames) Where / how are segments placed in PM? Contiguous versus paged implementation
  • 26. Contiguous Allocation Per Segment Each entry of the Segment Table ( ST ) records the starting location of one segment in physical memory; the Segment Table Register ( STR ) points to the ST of the current process. A virtual address ( s , w ) is translated by adding the offset w to the base address of segment s .
  • 27. Contiguous Allocation Per Segment Each segment is contiguous in PM Segment Table (ST) tracks starting locations STR points to ST Address translation: address_map(s, w) { if (resident(*(STR + s))) { pa = *(STR + s) + w; return pa; } else segment_fault; } Drawback: External fragmentation
  • 28. Contiguous Allocation Per Segment Each segment is contiguous in PM Segment Table (ST) tracks starting locations STR points to ST Address translation: address_map(s, w) { if (resident(*(STR + s))) { pa = *(STR + s) + w; return pa; } else segment_fault; } Drawback: External fragmentation
  • 30. Paging with segmentation Each segment is divided into fixed-size pages va = ( s , p , w ) | s | determines # of segments (size of ST) | p | determines # of pages per segment (size of PT) | w | determines page size Address Translation: address_map(s, p, w) { pa = *(*(STR + s) + p) + w; return pa; } Drawback: 2 extra memory references
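The double indirection pa = *(*(STR + s) + p) + w can be sketched with nested tables (the tables and page size are invented): the outer lookup models the ST access, the inner one the PT access.

```python
PAGE_SIZE = 256                      # illustrative: |w| = 8 bits
# ST maps segment -> that segment's page table; each PT maps page -> frame.
ST = {0: {0: 4, 1: 7},
      1: {0: 2}}

def address_map(s, p, w):
    pt = ST[s]                       # 1st extra memory reference (ST entry)
    f = pt[p]                        # 2nd extra memory reference (PT entry)
    return f * PAGE_SIZE + w
```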
  • 32. Paging of System Tables ST or PT may be too large to keep in PM Divide ST or PT into pages Keep track by an additional page table Paging of ST ST divided into pages Segment directory keeps track of ST pages Address Translation: address_map(s1, s2, p, w) { pa = *(*(*(STR + s1) + s2) + p) + w; return pa; } Drawback: 3 extra memory references.
  • 33. Translation Look-Aside Buffers (TLB) Advantage of VM Users view memory in a logical sense. Disadvantage of VM Extra memory accesses needed. Solution Translation Look-aside Buffers (TLB) A special high-speed memory Basic idea of TLB Keep the most recent translations of virtual to physical addresses readily available for possible future use. An associative memory is employed as a buffer.
  • 35. Translation Look-Aside Buffers (TLB) When the search of ( s , p ) fails, the complete address translation is needed. Replacement strategy: LRU .
  • 36. Translation Look-Aside Buffers (TLB) The buffer is searched associatively (in parallel )
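The TLB idea can be sketched as a small cache of recent (s, p) → f translations with LRU replacement, as the slides suggest; the capacity and backing tables below are invented:

```python
from collections import OrderedDict

PAGE_SIZE = 256
ST = {0: {0: 4, 1: 7, 2: 9}}         # full tables, walked only on a miss

class TLB:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.entries = OrderedDict() # (s, p) -> f, least recent first

    def translate(self, s, p, w):
        key = (s, p)
        if key in self.entries:          # hit: no table walk needed
            self.entries.move_to_end(key)
        else:                            # miss: full translation, cache result
            self.entries[key] = ST[s][p]
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict the LRU translation
        return self.entries[key] * PAGE_SIZE + w
```

A real TLB searches all entries in parallel in hardware; the OrderedDict merely models the LRU replacement order.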
  • 37. Operating Systems Principles Memory Management Lecture 8: Virtual Memory Memory Allocation in Paged Systems
  • 38. Memory Allocation with Paging Placement policy: where to allocate memory on request? Any free frame is OK (no external fragmentation) Keeping track of available space is sufficient, both for statically and dynamically allocated memory Replacement policy: which page(s) to replace on a page fault? Goal: minimize the number of page faults and/or the total number of pages loaded .
  • 39. Global/Local Replacement Policies Global replacement: Consider all resident pages (regardless of owner). Local replacement: Consider only pages of the faulting process , i.e., the working set of the process.
  • 40. Criteria for Performance Comparison Tool: Reference String (RS) r_0 r_1 … r_t … r_T, where r_t is the page number referenced at time t Criteria: The number of page faults The total number of pages loaded
  • 41. Criteria for Performance Comparison The two criteria are equivalent when each page fault loads exactly one page; the number of page faults will be used in the following discussions.
  • 42. Global Page Replacement Algorithms Optimal Replacement Algorithm (MIN) Accurate prediction needed Unrealizable Random Replacement Algorithm Playing roulette wheel FIFO Replacement Algorithm Simple and efficient Suffers from Belady’s anomaly Least Recently Used Algorithm ( LRU ) Doesn’t suffer from Belady’s anomaly, but high overhead Second-Chance Replacement Algorithm Economical version of LRU Third-Chance Replacement Algorithm Economical version of LRU that also considers dirty pages
  • 43. Optimal Replacement Algorithm (MIN) Replace page that will not be referenced for the longest time in the future . Problem: Reference String not known in advance.
  • 44. Example: Optimal Replacement Algorithm (MIN) RS = c a d b e b a b c d Replace the page that will not be referenced for the longest time in the future . Problem: the reference string is not known in advance. With frames initially holding a, b, c, d, only 2 page faults occur: at t = 5 the reference to e replaces d (the resident page whose next use lies furthest in the future), and at t = 10 the reference to d replaces a.
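Since MIN needs the whole reference string, it can only be simulated offline; this sketch reproduces the 2 page faults of the example with frames preloaded with a, b, c, d:

```python
def min_faults(rs, preloaded):
    """Simulate the optimal (MIN) policy over reference string rs."""
    frames = list(preloaded)
    faults = 0
    for t, page in enumerate(rs):
        if page in frames:
            continue
        faults += 1
        future = rs[t + 1:]
        # Evict the resident page whose next reference is furthest away
        # (or that is never referenced again).
        victim = max(frames, key=lambda q: future.index(q)
                     if q in future else len(future))
        frames[frames.index(victim)] = page
    return faults
```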
  • 45. Random Replacement Program reference strings are never known in advance. Without any prior knowledge, a random replacement strategy can be applied. Is there any prior knowledge available for common programs? Yes , the locality of reference . Random replacement is simple but does not exploit this property.
  • 46. The Principle of Locality Most instructions are sequential Except for branch instructions Most loops are short for -loops while -loops Many data structures are accessed sequentially Arrays Files of records
  • 47. FIFO Replacement Algorithm FIFO: Replace the oldest page Assumes that pages residing the longest in memory are least likely to be referenced in the future. Easy to implement, but may exhibit Belady’s anomaly , i.e., increasing the available memory can result in more page faults .
  • 48. Example: FIFO Replacement Algorithm RS = c a d b e b a b c d Replace the oldest page. With frames initially holding a, b, c, d (loaded in that order), 5 page faults occur: e replaces a, a replaces b, b replaces c, c replaces d, and d replaces e.
  • 49. Example: Belady’s Anomaly RS = d c b a d c e d c b a e c d c b a e With 2 page frames this string causes 17 page faults; with 3 frames, 14 page faults.
  • 50. Example: Belady’s Anomaly For the same string, 3 frames give 14 page faults but 4 frames give 15: adding memory increases the number of faults.
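A short FIFO simulation confirms both examples: 5 faults for RS = c a d b e b a b c d with four preloaded frames (slide 48), and the anomaly of slides 49–50, where the second string incurs 14 faults with 3 frames but 15 with 4:

```python
from collections import deque

def fifo_faults(rs, n_frames, preloaded=()):
    """Count page faults under FIFO replacement."""
    frames = deque(preloaded)     # left end = oldest resident page
    faults = 0
    for page in rs:
        if page in frames:
            continue              # FIFO ignores references to resident pages
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()      # evict the page resident the longest
        frames.append(page)
    return faults
```

Note that a hit does not reorder the queue; that indifference to recent use is exactly what makes the anomaly possible.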
  • 51. Least Recently Used Replacement (LRU) Replace the Least Recently Used page Remove the page which has not been referenced for the longest time. Complies with the principle of locality Doesn’t suffer from Belady’s anomaly
  • 52. Example: Least Recently Used Replacement (LRU) RS = c a d b e b a b c d Replace the least recently used page. With frames initially holding a, b, c, d, 3 page faults occur: e replaces c (the least recently used page), c replaces d, and d replaces e.
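The same harness with an LRU stack reproduces the 3 faults of this example (a sketch; the list front is the LRU end). Because LRU is a stack algorithm, adding frames can never increase its fault count:

```python
def lru_faults(rs, n_frames, preloaded=()):
    """Count page faults under LRU replacement."""
    stack = list(preloaded)       # front = least recently used
    faults = 0
    for page in rs:
        if page in stack:
            stack.remove(page)    # hit: lift the page out of its slot
        else:
            faults += 1
            if len(stack) == n_frames:
                stack.pop(0)      # evict the least recently used page
        stack.append(page)        # referenced page becomes most recent
    return faults
```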
  • 53. LRU Implementation Software queue: too expensive Time-stamping Stamp each referenced page with the current time Replace the page with the oldest stamp Hardware capacitor with each frame Charge at reference Replace the page with the smallest charge n -bit aging register R = R_{n−1} R_{n−2} … R_1 R_0 with each frame Set the left-most bit of the referenced page to 1 Shift all registers right at every reference or periodically Replace the page with the smallest value
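The aging-register approximation can be sketched like this (register width and shift period are arbitrary choices):

```python
N_BITS = 8   # width of each aging register (illustrative)

def age_step(regs, referenced):
    """One periodic step: shift every register right, then set the
    left-most bit of every page referenced since the previous step."""
    for page in regs:
        regs[page] >>= 1
    for page in referenced:
        regs[page] |= 1 << (N_BITS - 1)

def replacement_victim(regs):
    """The page with the smallest register value is least recently used."""
    return min(regs, key=regs.get)
```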
  • 54. Second-Chance Algorithm Approximates LRU Implement a use-bit u with each frame Set u = 1 when the page is referenced To select a page: If u == 0 , select the page Else, set u = 0 and consider the next frame A used page gets a second chance to stay in PM Also called the clock algorithm since the search cycles through the page frames .
  • 55. Example: Second-Chance Algorithm To select a page: If u == 0 , select the page Else, set u = 0 and consider the next frame, cycling through the page frames RS = c a d b e b a b c d With frames initially holding a, b, c, d (all use bits set), 4 page faults occur: e replaces a (the pointer first cycles once, clearing every use bit), a replaces c, c replaces d, and d replaces e.
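A clock (second-chance) simulation reproduces the 4 faults of this example; frames start with a, b, c, d and all use bits set, as on the slide:

```python
def clock_faults(rs, preloaded):
    """Count page faults under the second-chance (clock) policy."""
    frames = [[page, 1] for page in preloaded]   # [page, use bit]
    hand = 0
    faults = 0
    for page in rs:
        entry = next((f for f in frames if f[0] == page), None)
        if entry is not None:
            entry[1] = 1                 # referenced: set the use bit
            continue
        faults += 1
        while frames[hand][1] == 1:      # used pages get a second chance
            frames[hand][1] = 0
            hand = (hand + 1) % len(frames)
        frames[hand] = [page, 1]         # replace the first u == 0 page
        hand = (hand + 1) % len(frames)
    return faults
```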
  • 56. Third-Chance Algorithm The second-chance algorithm does not distinguish between read and write access Write access is more expensive Give modified pages a third chance: u -bit set at every reference ( read and write ) w -bit set at write reference To select a page, cycle through the frames, resetting bits, until uw == 00: uw = 11 → 01; uw = 10 → 00; uw = 01 → 00* (remember modification); uw = 00 → select. The * can be implemented by an additional bit.
  • 57. Example: Third-Chance Algorithm RS = c a^w d b^w e b a^w b c d (the superscript w marks write references) With frames initially holding a, b, c, d, 3 page faults occur: e replaces c, c replaces d, and d replaces b; the modified pages a and b survive longer thanks to their w -bits.
  • 58. Local Page Replacement Algorithms Measurements indicate that Every program needs a “ minimum ” set of pages If too few , thrashing occurs If too many , page frames are wasted The “ minimum ” varies over time How to determine and implement this “ minimum ”? It should depend only on the behavior of the process itself.
  • 59. Local Page Replacement Algorithms Optimal Page Replacement Algorithm (VMIN) The Working Set Model (WS) Page Fault Frequency Replacement Algorithm (PFF)
  • 60. Optimal Page Replacement Algorithm (VMIN) The method Define a sliding window ( t , t + τ ) of width τ + 1, where τ is a parameter (a system constant) At any time t , maintain as resident all pages visible in the window Guaranteed to generate the smallest number of page faults for a given window width.
  • 61. Example: Optimal Page Replacement Algorithm (VMIN) RS = c c d b c e c e a d, τ = 3: 5 page faults. Each page remains resident only while it is visible in the sliding window ( t , t + τ ).
  • 62. Example: Optimal Page Replacement Algorithm (VMIN) The trace shows, for each of the pages a–e, the interval during which it stays resident under the window ( t , t + τ ).
  • 63. Example: Optimal Page Replacement Algorithm (VMIN) By increasing τ , the number of page faults can be arbitrarily reduced , of course at the expense of using more page frames. VMIN is unrealizable since the reference string is unavailable in advance.
  • 64. Working Set Model Use a trailing window (instead of a future window) The working set W ( t , τ ) is the set of all pages referenced during the interval ( t − τ , t ) (instead of ( t , t + τ ) ) At time t : Remove all pages not in W ( t , τ ) A process may run only if its entire W ( t , τ ) is resident
  • 65. Example: Working Set Model RS = e d a c c d b c e c e a d, τ = 3: 5 page faults over the last ten references. A page leaves memory as soon as it drops out of the trailing window.
  • 66. Example: Working Set Model The trace shows, for each of the pages a–e, the interval during which it belongs to W ( t , τ ).
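A working-set simulation, where the resident set at time t is exactly the pages referenced in the last τ + 1 references (seeded here with the prefix e d a c), reproduces the 5 page faults:

```python
def ws_faults(rs, tau, history=()):
    """Count page faults under the working set model with window tau."""
    history = list(history)       # references before t = 1 seed the window
    faults = 0
    for page in rs:
        resident = set(history[-(tau + 1):])   # W(t, tau): trailing window
        if page not in resident:
            faults += 1           # the process must fault the page in
        history.append(page)
    return faults
```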
  • 67. Approximate Working Set Model Drawback: costly to implement Approximations: 1. Each page frame with an aging register Set the left-most bit to 1 whenever referenced Periodically shift all aging registers right Remove pages which reach zero from the working set 2. Each page frame with a use bit and a time stamp The use bit is turned on by hardware whenever referenced Periodically do the following for each page frame: Turn off the use-bit if it is on and set the current time as its time stamp Compute the turn-off time t_off of the page frame Remove the page from the working set if t_off > t_max
  • 68. Page Fault Frequency (PFF) Replacement Main objective: Keep the page fault rate low Basic principle: If the time between the current ( t_c ) and the previous ( t_{c−1} ) page fault exceeds a critical value τ , all pages not referenced during that interval are removed from memory. The algorithm of PFF: If the time between page faults ≤ τ , grow the resident set by adding the new page If the time between page faults > τ , shrink the resident set by adding the new page and removing all pages not referenced since the last page fault
  • 69. Example: Page Fault Frequency (PFF) Replacement RS = c c d b c e c e a d, τ = 2: 5 page faults. At the faults whose distance from the previous fault exceeds τ , the unreferenced pages are removed: first a and e, later b and d.
  • 70. Example: Page Fault Frequency (PFF) Replacement The trace shows, for each of the pages a–e, the interval during which it remains resident under PFF.
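A PFF sketch reproduces the 5 faults with τ = 2; following my reading of the slide’s trace, pages a, d, e are resident initially and the previous fault occurred at t = 0 (these initial conditions are assumptions, not stated outright on the slide):

```python
def pff_faults(rs, tau, resident, last_fault=0):
    """Count page faults under page-fault-frequency replacement."""
    resident = set(resident)
    referenced = set()            # pages referenced since the last fault
    faults = 0
    for t, page in enumerate(rs, start=1):
        if page not in resident:
            faults += 1
            if t - last_fault > tau:
                resident &= referenced    # shrink: drop unreferenced pages
            resident.add(page)            # grow by the faulting page
            referenced = set()
            last_fault = t
        referenced.add(page)
    return faults
```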
  • 71. Load Control and Thrashing Main issues: How to choose the degree of multiprogramming ? Decrease ? Increase ? When the level is decreased , which process should be deactivated ? When a process is created or a suspended one reactivated , which of its pages should be loaded ? One or many ? Load Control The policy that sets the number & type of concurrent processes Thrashing The system spends most of its effort moving pages between main and secondary memory, i.e., CPU utilization is low .
  • 72. Load Control: Choosing the Degree of Multiprogramming Local replacement: Each process has a well-defined resident set , e.g., the working set model & PFF replacement This automatically imposes a limit , i.e., up to the point where total memory is allocated Global replacement: No working set concept Use CPU utilization as a criterion With too many processes , thrashing occurs
  • 73. Load Control: Choosing the Degree of Multiprogramming L = mean time between faults; S = mean page fault service time
  • 74. Load Control: Choosing the Degree of Multiprogramming How to determine the optimum degree of multiprogramming, N_max ? Beyond it, thrashing occurs.
  • 75. Load Control: Choosing the Degree of Multiprogramming How to determine N_max ? L = S criterion: the page fault service time S needs to keep up with the mean time between faults L . 50% criterion: CPU utilization is highest when the paging disk is 50% busy (found experimentally). Clock load control: scan the list of page frames to find a page to replace; if the pointer advance rate is too low, increase the multiprogramming level.
  • 76. Load Control: Choosing the Process to Deactivate Lowest-priority process Consistent with the scheduling policy Faulting process Eliminates the process that would be blocked anyway Last process activated The most recently activated process is considered least important Smallest process Least expensive to swap in and out Largest process Frees up the largest number of frames
  • 77. Load Control: Prepaging Which pages to load when a process is activated Prepage the last resident set
  • 78. Evaluation of Paging Advantages of paging Simple placement strategy No external fragmentation Parameters affecting the dynamic behavior of paged systems Page size Available memory
  • 80. Evaluation of Paging A process requires a certain percentage of its pages within a short time period after activation . Prepaging is important.
  • 81. Evaluation of Paging A smaller page size is beneficial. Another advantage of a small page size: reduced memory waste due to internal fragmentation . However, small pages require larger page tables .
  • 82. Evaluation of Paging W : the minimum amount of memory needed to avoid thrashing . Load control is important.