Lecture 6 MSCS

The document discusses memory management in both uni-programming and multiprogramming systems, detailing the binding of instructions and data to memory at compile, load, and execution times. It explains the concepts of logical versus physical address space, virtual memory, and various page replacement algorithms, including FIFO, LRU, and optimal algorithms. Additionally, it addresses the issue of thrashing, which occurs when a system spends excessive time swapping pages, leading to low CPU utilization.

Uploaded by Ali Shan

© All Rights Reserved

Lecture 6 MSCS (AOS)

LECTURER M HAMAD EJAZ


MEMORY MANAGEMENT

 In a uni-programming system, main memory is divided into two parts: one part for the operating system (resident monitor, kernel) and one part for the user program currently being executed. In a multiprogramming system, the "user" part of memory must be further subdivided to accommodate multiple processes. The task of subdivision is carried out dynamically by the operating system and is known as memory management.
Binding of Instructions and Data to Memory

 Address binding of instructions and data to memory addresses can happen at three different stages.
 1. Compile time: Compile time is when the program's source code is translated into machine code. If the memory location is known a priori at this stage, the compiler can generate absolute code.

 2. Load time: Load time is when all related program files are linked and loaded into main memory. The compiler must generate relocatable code if the memory location is not known at compile time; final binding is then done at load time.
 3. Execution time: Execution time is when the processor executes the program in main memory. Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This requires hardware support for address maps (e.g., base and limit registers).
(Figure: multistep processing of a user program.)
Logical- Versus Physical-Address Space

 ⇒ An address generated by the CPU is commonly referred to as a logical address or a virtual address, whereas an address seen by the memory unit is commonly referred to as a physical address.
 ⇒ The set of all logical addresses generated by a program is the logical-address space, whereas the set of all physical addresses corresponding to these logical addresses is the physical-address space.
 ⇒ Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
 ⇒ The memory-management unit (MMU) is a hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
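The relocation-register mapping can be sketched in a few lines. This is an illustrative model, not a real MMU interface; the `LIMIT` and `RELOCATION` values are assumptions chosen for the example.

```python
LIMIT = 4096        # size of the process's logical address space (assumed)
RELOCATION = 14000  # base where the process is loaded in physical memory (assumed)

def translate(logical_addr: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    if not 0 <= logical_addr < LIMIT:
        # A real MMU would trap to the operating system here.
        raise MemoryError(f"trap: logical address {logical_addr} out of range")
    # The MMU adds the relocation (base) register to every logical address.
    return RELOCATION + logical_addr

print(translate(346))  # 14346: the physical address seen by the memory unit
```

The user process only ever sees logical addresses in [0, LIMIT); the relocation register makes the same code runnable wherever it is loaded.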
VIRTUAL MEMORY

 Virtual memory is a technique that allows the execution of processes that may not be completely in memory. Only part of the program needs to be in memory for execution, which means the logical address space can be much larger than the physical address space. Virtual memory also allows processes to share files and address spaces easily, and it provides an efficient mechanism for process creation.
(Figure: virtual memory that is larger than physical memory.)
Virtual memory can be implemented via:

 Demand paging
 Demand segmentation

 DEMAND PAGING
 A demand-paging system is similar to a paging system with swapping. Processes normally reside on secondary memory (usually a disk). When we want to execute a process, we swap it into memory; rather than swapping the entire process, however, only the required pages are swapped in. This is done by a lazy swapper, which never swaps a page into memory unless that page will be needed. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
Page transfer Method:

 When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading in pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
PAGE REPLACEMENT

 Page replacement is a mechanism that loads a page from disk into memory when a page of memory needs to be allocated. Page replacement proceeds as follows:
 1. Find the location of the desired page on the disk.
 2. Find a free frame:
 a. If there is a free frame, use it.
 b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
 c. Write the victim page to the disk; change the page and frame tables accordingly.
 3. Read the desired page into the (newly) free frame; change the page and frame tables.
 4. Restart the user process.
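The steps above can be sketched as a generic replacement routine. All names here (`replace_page`, `select_victim`, the dictionary-based frame table) are hypothetical placeholders for illustration, not operating-system APIs.

```python
def replace_page(desired_page, frames, disk, select_victim):
    """Sketch of the replacement steps; 'frames' maps frame number -> page or None."""
    # Step 2a: use a free frame if one exists.
    frame = next((f for f, p in frames.items() if p is None), None)
    if frame is None:
        # Step 2b: no free frame, so the replacement algorithm selects a victim.
        frame = select_victim(frames)
        # Step 2c: write the victim page to disk and update the tables.
        disk[frames[frame]] = "written back"
        frames[frame] = None
    # Step 3: read the desired page into the (newly) free frame.
    frames[frame] = desired_page
    return frame  # Step 4: the user process can now be restarted

frames = {0: "A", 1: "B"}
disk = {}
print(replace_page("C", frames, disk, select_victim=lambda fr: min(fr)))  # 0
```

The `select_victim` parameter is where the algorithms described next (FIFO, OPT, LRU, and so on) plug in.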
Page Replacement Algorithms:

 Page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults. The string of memory references is called a reference string.
 The different page replacement algorithms are described as follows:
 1. First-In-First-Out (FIFO) Algorithm:
 A FIFO replacement algorithm associates with each page the time when
that page was brought into memory. When a page must be replaced, the
oldest page is chosen to swap out. We can create a FIFO queue to hold
all pages in memory. We replace the page at the head of the queue.
When a page is brought into memory, we insert it at the tail of the queue.
Example
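A minimal FIFO fault-counting sketch (not from the lecture), run on the classic reference string used to illustrate Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement on a reference string."""
    memory = deque()          # pages in memory; head = oldest arrival
    faults = 0
    for page in refs:
        if page in memory:
            continue          # hit: FIFO does not reorder on a hit
        faults += 1
        if len(memory) == frames:
            memory.popleft()  # evict the page that arrived first
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, *more* faults (Belady's anomaly)
```

Note how adding a fourth frame actually increases the fault count on this string, which is the anomaly that motivates the optimal algorithm below.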
Optimal Page Replacement algorithm:

 One result of the discovery of Belady's anomaly (the situation in which, for some algorithms, the page-fault rate may increase as the number of allocated frames increases) was the search for an optimal page-replacement algorithm. An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms and never suffers from Belady's anomaly. Such an algorithm does exist and has been called OPT or MIN. It is simply: "Replace the page that will not be used for the longest period of time." Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a fixed number of frames.
Example
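A sketch of OPT (illustrative only; a real system cannot see the future, which is why OPT is used as a benchmark rather than implemented). The reference string is a common textbook example:

```python
def opt_faults(refs, frames):
    """Count page faults for the optimal (OPT/MIN) algorithm."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # Evict the resident page whose next use is farthest in the
            # future (or that is never used again).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # 9 faults with 3 frames
```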
LRU Page Replacement algorithm

 If we use the recent past as an approximation of the near future, then we will replace the page that has not been used for the longest period of time. This approach is the least-recently-used (LRU) algorithm. LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.
Example
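A minimal LRU fault-counting sketch (not from the lecture), using a list ordered from least to most recently used and the same textbook reference string as above:

```python
def lru_faults(refs, frames):
    """Count page faults for LRU replacement."""
    memory = []               # ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # refresh: will re-append as most recent
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)       # this page is now the most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12 faults: worse than OPT's 9, but no anomaly
```

Unlike FIFO, LRU reorders on every hit, which is exactly what the reference-bit approximations below try to do cheaply in hardware.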
LRU Approximation Page Replacement
algorithm

 In this algorithm, a reference bit is associated with each entry in the page table. Initially, all bits are cleared (to 0) by the operating system. As a user process executes, the bit associated with each page referenced is set (to 1) by the hardware. After some time, we can determine which pages have been used and which have not by examining the reference bits. This approach gives rise to the following algorithms:
Additional-Reference-Bits and Second-Chance Algorithms:
i. Additional-Reference-Bits Algorithm: We can keep an 8-bit byte for each page in a table in memory. At regular intervals, a timer interrupt transfers control to the operating system. The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting the other bits right by 1 bit position and discarding the low-order bit. These 8-bit shift registers contain the history of page use for the last eight time periods. If we interpret these bytes as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced.
 ii. Second-Chance Algorithm: The basic algorithm of second-chance
replacement is a FIFO replacement algorithm. When a page has been selected, we
inspect its reference bit. If the value is 0, we proceed to replace this page. If the
reference bit is set to 1, we give that page a second chance and move on to select
the next FIFO page. When a page gets a second chance, its reference bit is cleared
and its arrival time is reset to the current time. Thus, a page that is given a second
chance will not be replaced until all other pages are replaced.
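Both reference-bit schemes can be sketched compactly. The data structures here (a plain integer history byte, a list of `(page, ref_bit)` pairs) are illustrative assumptions, not kernel structures:

```python
def age(history: int, referenced: bool) -> int:
    """Additional-reference-bits: shift the current reference bit into the
    high-order bit of the 8-bit history byte, discarding the low-order bit."""
    return ((history >> 1) | (0x80 if referenced else 0x00)) & 0xFF

def second_chance_victim(queue):
    """Second chance: scan a FIFO list of (page, ref_bit) pairs. A page with
    its bit set gets the bit cleared and moves to the tail of the queue."""
    queue = list(queue)
    while True:
        page, ref = queue.pop(0)        # inspect the oldest page (FIFO head)
        if ref == 0:
            return page, queue          # bit clear: this is the victim
        queue.append((page, 0))         # second chance: clear bit, re-enqueue

print(f"{age(age(0, True), False):08b}")  # 01000000: referenced two periods ago
victim, rest = second_chance_victim([("A", 1), ("B", 0), ("C", 1)])
print(victim)  # B, because A's bit was set and it got a second chance
```

With `age`, a page referenced in every period converges toward 11111111 while an idle page decays toward 00000000, so an unsigned comparison picks the approximate LRU page.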
Counting-Based Page Replacement

 We could keep a counter of the number of references that have been made to each page and develop the following two schemes:
 i. LFU page replacement algorithm: The least frequently
used (LFU) page replacement algorithm requires that the
page with the smallest count be replaced. The reason for this
selection is that an actively used page should have a large
reference count.
 ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement algorithm replaces the page with the largest count, on the argument that the page with the smallest count was probably just brought in and has yet to be used.
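As a rough sketch (the helper names are illustrative, not standard APIs), both counting-based policies reduce to a min or max over per-page reference counts:

```python
from collections import Counter

def lfu_victim(resident, counts):
    """LFU: among resident pages, pick the one with the smallest count."""
    return min(resident, key=lambda p: counts[p])

def mfu_victim(resident, counts):
    """MFU: among resident pages, pick the one with the largest count."""
    return max(resident, key=lambda p: counts[p])

counts = Counter([1, 1, 1, 2, 2, 3])  # page 1 used 3x, page 2 twice, page 3 once
resident = [1, 2, 3]
print(lfu_victim(resident, counts))  # 3 (least frequently used)
print(mfu_victim(resident, counts))  # 1 (most frequently used)
```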
THRASHING

 The system spends most of its time shuttling pages between main memory and secondary memory due to frequent page faults. This behavior is known as thrashing.
 A process is thrashing if it is spending more time paging than executing. This leads to low CPU utilization, which the operating system misinterprets as a need to increase the degree of multiprogramming.