Main Memory Management

The memory manager is responsible for efficiently allocating memory to processes. One allocation strategy is fixed partitioning, where memory is divided into equal or unequal fixed partitions that are assigned to processes. This can cause internal fragmentation from unused space within partitions and external fragmentation from partitions being too small for some processes. The best-fit strategy assigns processes to the smallest sufficient partition to reduce fragmentation.


MAIN MEMORY MANAGEMENT

SCHOOL OF COMPUTING RIEL A. GOMEZ


INTRODUCTION

• The memory manager is the component of the operating system that is
  responsible for ensuring that memory is shared in an efficient and
  error-free manner.

• Issues the memory manager has to contend with are:

  1. It must guarantee that the memory spaces of processes are protected.

  2. It must make sure that each process is allotted enough memory space
     so that it may proceed with its execution.

     It must also deal with situations wherein a process might need a
     larger memory space than what is available.

Main Memory Management


INTRODUCTION

  3. It must know what memory locations are being used by each process
     and what parts of memory are free for allocation to incoming
     processes.

  4. If the main memory is too small to accommodate all processes, the
     memory manager must be able to swap processes or parts of processes
     in and out of memory to ensure that all processes get a chance to
     execute.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• One of the basic memory management strategies used by operating
  systems is allocating main memory space to several processes.

• One such strategy of memory allocation uses fixed partitioning.

• In this strategy, the main memory is divided into a fixed number of
  regions or partitions.

• The simplest way to do this is to segregate the memory into
  equal-sized partitions.

• A partition can then be allocated to one process.

  [Figure: a 32-MB main memory divided into eight 4-MB partitions, with
  the operating system occupying the first partition]

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• For example, a 32-MB main memory can be divided into 8 equal-sized
  partitions.

• Each partition has a size of 4 MB.

• The operating system will use one of the partitions (usually the first
  one) while the remaining 7 are free for allocation to incoming
  processes.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• Take note that partitions or boundaries are fixed.

• So once a process is loaded into a partition, any unused memory
  locations within that partition cannot be assigned to other processes.

• In other words, a partition cannot be shared by two or more processes.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• Problems that are associated with the fixed-partition memory
  allocation strategy:

  – A small process that is assigned a partition will not use up all of
    the memory locations of that partition.

    The unused portion (wastage of memory) is called internal
    fragmentation.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

  The amount of internal fragmentation is the difference between the
  size of the partition and the size of the process occupying it.

  For example, if a 3-MB process is loaded in a 4-MB partition, it would
  result in an internal fragmentation of 4 MB – 3 MB = 1 MB: a 1-MB
  portion of the partition is left unused.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

  – A process which is larger than the partition size will never be
    loaded into main memory. For example, a 5-MB process cannot fit into
    any 4-MB partition.

    This would force the programmer to redesign his program or use other
    programming techniques to reduce the size of the process.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• To minimize these problems, a better alternative is to divide the
  memory into fixed, unequal-sized partitions.

• In the previous example, the 32-MB memory is still divided into 8
  partitions but this time of unequal sizes (for instance, a 4-MB
  partition for the operating system followed by partitions of 2 MB,
  2 MB, 3 MB, 4 MB, 4 MB, 5 MB, and 8 MB).

• This will allow the memory to accommodate processes of various sizes.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• Since the partitions have unequal sizes, the operating system must now
  have a strategy for selecting which partition will be allocated to an
  incoming process.

• The operating system will always select the smallest partition in
  which the incoming process will fit.

• For example, assume that there is an incoming 5-MB process. If the
  available partitions have sizes of 4 MB, 6 MB, and 7 MB, then the
  operating system will assign the process to the 6-MB partition.

• This strategy is often called the best-fit available strategy.
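
  The selection rule can be made concrete with a short sketch. Below is a
  minimal Python illustration, assuming each partition is described only
  by a size and a free flag; the names Partition and best_fit_available
  and the sample sizes are illustrative, not taken from the slides.

  # Minimal sketch of the best-fit available strategy for fixed,
  # unequal-sized partitions. Names and sample values are illustrative.
  class Partition:
      def __init__(self, size_mb):
          self.size_mb = size_mb
          self.process = None              # None means the partition is free

  def best_fit_available(partitions, process_size_mb):
      """Return the smallest free partition that can hold the process."""
      candidates = [p for p in partitions
                    if p.process is None and p.size_mb >= process_size_mb]
      if not candidates:
          return None                      # the process must wait
      return min(candidates, key=lambda p: p.size_mb)

  # Usage: a 5-MB process with free partitions of 4 MB, 6 MB, and 7 MB
  partitions = [Partition(4), Partition(6), Partition(7)]
  print(best_fit_available(partitions, 5).size_mb)   # 6 -> the 6-MB partition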

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

Aside from the problem of internal fragmentation, memory is also wasted
because of external fragmentation.

Consider the following scenario: a 16-MB main memory is divided into a
4-MB partition for the operating system and partitions of 3 MB, 4 MB,
and 5 MB. The queue of incoming processes contains P1 (2 MB), P2 (5 MB),
and P3 (5 MB), in that order.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

Following the best-fit available strategy, process P1, which has a size
of 2 MB, will be assigned to the 3-MB partition.

The 5-MB process P2 will be assigned to the 5-MB partition.

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

Process P3, which is also 5 MB in size, cannot be loaded into memory
since it will not fit in the last remaining partition, which only has a
size of 4 MB. It therefore has to wait for P2 to finish executing and
free up the 5-MB partition (external fragmentation).

Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• Another problem is the choice of partition sizes. The various sizes of
  the partitions will greatly affect the amount of internal and external
  fragmentation in the system.

  If the processes are all small, internal fragmentation will be quite
  large. If the processes are quite large, then external fragmentation
  may occur.

  There is no proper way to partition memory that can maximize the use
  of the main memory all the time.
Main Memory Management


MEMORY ALLOCATION (FIXED PARTITION)

• The last problem is that the number of partitions puts a limit on the
  degree of multiprogramming.

  If CPU utilization is low, the operating system cannot increase it any
  further by increasing the degree of multiprogramming since there are
  no more partitions available.

  In the previous example, the degree of multiprogramming is fixed at 7
  since there are only 7 partitions for processes.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• Instead of dividing memory into fixed partitions, variable
  partitioning (also called dynamic partitioning) allocates the exact
  memory space needed by each process.

• In this way, there will be a variable number of partitions (and
  processes) in memory.

• Therefore, there is no fixed limit on the degree of multiprogramming.
  The actual number of programs that can be multiprogrammed depends on
  the size of the main memory and the sizes of the processes.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• To demonstrate how variable partitioning works, assume that a computer
  system has a 32-MB main memory with the operating system occupying the
  first 4 MB.

• Assume further the following processes, which are inside the job queue
  (waiting to be loaded into the main memory):

  Process   Size    CPU Burst
  P1        12 MB   10
  P2         7 MB    8
  P3         8 MB   11
  P4         5 MB    7

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

This is the initial state of the main memory.

The operating system occupies the first 4 MB of the 32-MB memory,
thereby giving a total of 28 MB of free memory space (called a hole) for
allocation to incoming processes.
Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

Process P1 will be loaded into the memory. It can occupy the first 12 MB
of the available 28-MB space.

This will leave a 16-MB hole that can be allocated to other processes.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

P2 can now enter the main memory. The first 7 MB of the 16-MB hole can
now be assigned to process P2.

This will leave a hole with a 9-MB capacity for the remaining processes.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

When P3 is loaded next, it will occupy the first 8 MB of the 9-MB hole.

There is now a 1-MB hole left for allocation.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

At this point, P4 cannot be allocated any memory space since the 1-MB
hole available is too small for P4.

So it has to wait until one of the processes inside the memory finishes
executing and releases its memory allocation.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

After 8 time units, P2 would have finished executing. The operating
system will now de-allocate its memory space.

This would create a new hole whose size is 7 MB. There are now two holes
available for allocation.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

Since there is now enough space, the 7-MB hole can now be assigned
to P4.

Since the size of P4 is only 5 MB, this will leave a 2-MB hole in
addition to the 1-MB hole.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• From the preceding example, it can be seen that there is no internal
  fragmentation in variable partitioning.

  This is because the operating system assigns only the exact memory
  space needed by each process.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

However, there is still external fragmentation.

In the example given, there is a 1-MB hole that can be allocated to any
incoming process. But since it is too small for P4, the process has to
wait for more memory space to become available.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• There will be situations wherein there will be several holes available
  that can be assigned to a process.

• To keep track of these holes, the operating system maintains a list of
  all holes within main memory (the free hole list).
Main Memory Management
MEMORY ALLOCATION (VARIABLE PARTITION)

• As in fixed partitioning using unequal-sized partitions, the operating
  system must also have a strategy to help it decide into which hole a
  process will be placed.

  In variable partitioning, there are three such placement strategies:

  1. First-Fit Strategy. The operating system scans main memory from the
     beginning, and the first hole encountered that is large enough for
     the incoming process will be allocated.

     This is the fastest among the three strategies.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

  2. Best-Fit Strategy. The operating system searches the list of holes
     for the smallest hole that is large enough for the incoming
     process, thereby producing the smallest remaining hole.

     One problem with this strategy is that the operating system has to
     search the entire list of holes in order to find the best-fit hole.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

  3. Worst-Fit Strategy. This is the opposite of the best-fit strategy:
     the operating system searches the list of holes for the largest
     hole, thereby producing the largest remaining hole.

     The operating system also has to search the entire list of holes in
     order to find the worst-fit hole.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

Example: a 32-MB main memory currently holds the operating system and
processes P1, P2, and P3, leaving three holes of 4 MB, 3 MB, and 6 MB.
A 2-MB process P4 is arriving.

If the first-fit strategy is used, then the incoming 2-MB process P4
will be assigned to the 4-MB hole since it is the first available hole
large enough to accommodate P4.

If best-fit is used, then P4 will be placed in the 3-MB hole.

And if worst-fit is used, then P4 will be assigned to the 6-MB hole.
A short sketch of the three placement strategies follows.
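
Here is a minimal Python sketch of the three strategies. It assumes
holes are kept as (start, size) pairs in address order; the function
names and the sample start addresses are illustrative, not from the
slides.

  # Minimal sketch of the three placement strategies for variable
  # partitioning. Holes are (start_mb, size_mb) pairs in address order.
  def first_fit(holes, size):
      # scan from the beginning, take the first hole that is big enough
      for hole in holes:
          if hole[1] >= size:
              return hole
      return None

  def best_fit(holes, size):
      # search the whole list for the smallest hole that still fits
      fitting = [h for h in holes if h[1] >= size]
      return min(fitting, key=lambda h: h[1]) if fitting else None

  def worst_fit(holes, size):
      # search the whole list for the largest hole
      fitting = [h for h in holes if h[1] >= size]
      return max(fitting, key=lambda h: h[1]) if fitting else None

  # Usage: holes of 4 MB, 3 MB, and 6 MB; incoming 2-MB process P4
  holes = [(8, 4), (15, 3), (26, 6)]     # illustrative start addresses
  print(first_fit(holes, 2))             # (8, 4)  -> the 4-MB hole
  print(best_fit(holes, 2))              # (15, 3) -> the 3-MB hole
  print(worst_fit(holes, 2))             # (26, 6) -> the 6-MB hole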

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• Based on simulations, the worst-fit strategy performed the worst in
  terms of speed and external fragmentation.

  The first-fit and best-fit strategies have the same performance in
  terms of external fragmentation.

  But first-fit clearly has the best performance in terms of speed.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• External fragmentation is mainly due to the fact that several holes
  may be scattered throughout memory and none is big enough to allow an
  incoming process to be loaded into main memory.

• The problem is that the free space is not contiguous.

• To solve this problem, the operating system must regularly perform
  compaction.

• Compaction is moving the processes upwards in main memory so that the
  free memory locations may be grouped together in one large block.

  [Figure: before compaction, P1, P2, and P3 are separated by 4-MB,
  3-MB, and 6-MB holes; after compaction, P1, P2, and P3 are contiguous
  and the free space forms a single 13-MB hole.]
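
A minimal sketch of the idea, assuming processes are kept as
(name, start, size) entries in address order and are simply slid toward
low addresses one after another; the names and values are illustrative:

  # Minimal sketch of compaction under variable partitioning: slide every
  # process toward low addresses so the free space becomes one large hole.
  def compact(processes, os_size_mb, memory_size_mb):
      """processes: list of (name, start_mb, size_mb), in address order."""
      next_free = os_size_mb                 # first address after the OS
      compacted = []
      for name, _, size in processes:
          compacted.append((name, next_free, size))   # process moved here
          next_free += size
      hole = (next_free, memory_size_mb - next_free)  # single remaining hole
      return compacted, hole

  # Usage: the layout from the compaction figure (32-MB memory, 4-MB OS)
  procs = [("P1", 4, 4), ("P2", 12, 3), ("P3", 18, 8)]
  print(compact(procs, 4, 32))
  # ([('P1', 4, 4), ('P2', 8, 3), ('P3', 11, 8)], (19, 13))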

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• The problem with compaction is that it involves moving processes to
  different memory locations during execution time.

  This means that the operating system must be able to re-assign
  addresses during run-time.

  Compaction is also time-consuming, particularly if it involves moving
  hundreds of processes.

Main Memory Management


MEMORY ALLOCATION (VARIABLE PARTITION)

• A better solution to the problem of external fragmentation is to allow
  a process to occupy non-contiguous memory locations.

• If the process can be divided into smaller portions, then these
  portions may be able to fit in the different holes scattered
  throughout main memory.

• This solution is called paging.

Main Memory Management


PAGING

• Paging is a memory management scheme in which a process is allowed to
  occupy non-contiguous memory space.

• Processes are divided into equal-sized blocks called pages while the
  main memory is also divided into equal-sized blocks called frames.

• The size of a page is equal to the size of a frame so that a page will
  exactly fit in a frame.

• The idea behind paging is to load the pages of a process into
  different frames in main memory without worrying if the entire process
  will occupy contiguous memory locations.

Main Memory Management


PAGING

• As an example, assume that there are two processes (Process 1 and
  Process 2) and an empty main memory.

Main Memory Management


PAGING
• Process P1 is divided into 4 pages (pages 0 to 3).

• Process P2 is divided into 8 pages (pages 0 to 7).

• The main memory is divided into 16 frames (frames 0 to 15).

Main Memory Management


PAGING
• Instead of loading P1 into contiguous memory locations, its pages can
  be loaded into any available frame in memory.

  Page P0-1 goes to frame 6
  Page P1-1 goes to frame 1
  Page P2-1 goes to frame 12
  Page P3-1 goes to frame 8

Main Memory Management


PAGING
• The same can be done for process P2.

  Page P0-2 goes to frame 3
  Page P1-2 goes to frame 0
  Page P2-2 goes to frame 5
  Page P3-2 goes to frame 14
  Page P4-2 goes to frame 7
  Page P5-2 goes to frame 11
  Page P6-2 goes to frame 9
  Page P7-2 goes to frame 13

• The operating system maintains a list of what frames are free or
  available for allocation. This is called the free frame list.

Main Memory Management


PAGING

• Since the pages of a process may now be scattered all throughout the
  memory, the operating system must have a mechanism to keep track of
  where it has placed these pages.

• This mechanism is in the form of a page table.

• The operating system maintains a page table for every process in
  memory. The page table contains information on what memory frames the
  pages of a process are assigned to.
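
To make the role of the page table concrete, here is a minimal Python
sketch of logical-to-physical address translation using a per-process
page table. The page table contents are taken from the example for
process P1; the 4-KB page size and the function name are illustrative
assumptions.

  # Minimal sketch of address translation with a per-process page table.
  PAGE_SIZE = 4096                     # bytes per page (= bytes per frame)

  # page table for process P1: index = page number, value = frame number
  page_table_p1 = [6, 1, 12, 8]

  def translate(page_table, logical_address):
      """Map a logical address to a physical address via the page table."""
      page_number = logical_address // PAGE_SIZE
      offset = logical_address % PAGE_SIZE
      frame_number = page_table[page_number]
      return frame_number * PAGE_SIZE + offset

  # Usage: byte 100 of page 2 in P1 ends up in frame 12
  print(translate(page_table_p1, 2 * PAGE_SIZE + 100))   # 12*4096 + 100 = 49252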

Main Memory Management


PAGING
A page table is indexed by page number.

  Page Table for Process P1        Page Table for Process P2
  page 0 -> frame 6                page 0 -> frame 3
  page 1 -> frame 1                page 1 -> frame 0
  page 2 -> frame 12               page 2 -> frame 5
  page 3 -> frame 8                page 3 -> frame 14
                                   page 4 -> frame 7
                                   page 5 -> frame 11
                                   page 6 -> frame 9
                                   page 7 -> frame 13

Main Memory Management


PAGING
• Because of paging, external fragmentation has been greatly minimized
  since a process is now allowed to occupy non-contiguous memory
  locations.

• However, there is still internal fragmentation. But it is only a small
  fraction of the last page of any process.

  For example, assume that there is a process whose size is 64,050
  bytes. If the page size is 64 bytes, then the process will have a
  total of 64,050 / 64 = 1,000.8 ≈ 1,001 pages.

  The first 1,000 pages will be completely filled with the first 64,000
  bytes of the process. However, the last page will only have 50 bytes
  in it, therefore leaving 14 bytes unoccupied. This results in an
  internal fragmentation of 14 bytes.

Main Memory Management


PAGING

• If the process size is 64,001 bytes, the last page will only have one
  byte in it, resulting in an internal fragmentation of 63 bytes, which
  is the worst-case scenario for this process.

• In general, if the page size is n bytes, the maximum internal
  fragmentation will be n – 1 bytes.

• It should be emphasized again that internal fragmentation only occurs
  at the last page of every process.

• So one issue that should be addressed in paging is determining what
  the size of a page (or a frame) should be.
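
The two computations above can be captured in a short sketch; this is a
minimal Python illustration using the numbers from the example (the
function name is ours):

  # Minimal sketch of the page-count and internal-fragmentation
  # computation from the example above.
  import math

  def paging_stats(process_size, page_size):
      pages = math.ceil(process_size / page_size)      # round up to whole pages
      used_in_last_page = process_size - (pages - 1) * page_size
      internal_frag = page_size - used_in_last_page    # wasted bytes in last page
      return pages, internal_frag

  print(paging_stats(64050, 64))   # (1001, 14) -> 1,001 pages, 14 bytes wasted
  print(paging_stats(64001, 64))   # (1001, 63) -> worst case: 63 bytes wasted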

Main Memory Management


PAGING

• The advantage of having a small page size is a reduction in internal
  fragmentation since only a few bytes will be wasted if a page is not
  entirely filled up.

• However, the disadvantage is that a process may have too many pages,
  thus making its page table quite large.

  If page tables are stored in main memory, then they may occupy a
  significant amount of memory space.

  Having too many pages will also require a relatively larger amount of
  time to transfer them from the hard disk to the main memory.

Main Memory Management


PAGING

• If the page size is too large, then an increase in internal
  fragmentation is expected.

  However, large page sizes result in smaller page tables since each
  process will only have a small number of pages.

  Transferring a small number of pages from hard disk to main memory is
  also faster because there is less overhead due to latency and seek
  time.

  In practice, page sizes may range from 512 bytes to 16 MB depending on
  the computer architecture.

Main Memory Management


PAGING

• Since page tables are needed for every instruction reference, they
  must be stored in such a way that they can be accessed fast; otherwise
  system throughput will drastically go down.

• The following are the options in the implementation of page tables:

  1. Dedicated Registers. The simplest and most efficient option is to
     store the page tables in a dedicated set of registers.

     This option offers very fast access times. However, page tables
     often contain a very large number of entries, thus making this
     option very expensive.

Main Memory Management


PAGING

  2. Main Memory. Another option is to keep the page tables in main
     memory.

     Although this option is not as expensive as using dedicated
     registers, it is extremely slow.

     This is because two memory accesses will always be needed every
     time an instruction is fetched: one to read the page table entry
     and another to access the instruction or data itself.

     This effectively increases memory access time by a factor of 2.
Main Memory Management
PAGING
  3. Cache Memory. The third and more popular option is to use a small
     but fast cache memory called the translation look-aside buffer
     (TLB) to store the most recently used page table entries.

     Take note that the entire page table is still kept in main memory.
     Only the entries that have been recently used are copied into the
     TLB.

     This scheme is much faster compared to using the main memory alone
     and is not as expensive as using dedicated registers.
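
A minimal sketch of a TLB lookup in front of a memory-resident page
table; the dictionary-based TLB, its size, and the simple oldest-entry
eviction rule are illustrative assumptions, not details from the slides.

  # Minimal sketch of a TLB in front of a memory-resident page table.
  from collections import OrderedDict

  page_table = {0: 6, 1: 1, 2: 12, 3: 8}   # full table, kept in main memory
  tlb = OrderedDict()                       # small cache of recent entries
  TLB_SIZE = 2

  def lookup_frame(page_number):
      if page_number in tlb:                # TLB hit: fast, no memory access
          return tlb[page_number]
      frame = page_table[page_number]       # TLB miss: read entry from memory
      if len(tlb) >= TLB_SIZE:
          tlb.popitem(last=False)           # evict the oldest cached entry
      tlb[page_number] = frame
      return frame

  print(lookup_frame(2))   # miss -> 12, entry cached
  print(lookup_frame(2))   # hit  -> 12, served from the TLB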

Main Memory Management


PAGING
• There can still be external fragmentation in paging if there are not
  enough frames to hold all the pages of a process.

• In the example given, Process 3 has 8 pages.

• But since there are only 4 frames that are unused, Process 3 cannot be
  loaded into the main memory and therefore cannot execute.

Main Memory Management


VIRTUAL MEMORY

• Virtual memory is an extension of paging which allows a process to
  execute even though not all of its pages are inside the main memory.

• In other words, the entire process does not have to be loaded into the
  main memory in order to execute.

Main Memory Management


VIRTUAL MEMORY

• Take a spreadsheet program, for example.

  Most spreadsheet programs have functions that perform special
  computations such as statistical functions, financial functions,
  mathematical functions, etc.

  Typical spreadsheet users do not use all of these functions, so when
  the spreadsheet process is called, only those functions that are used
  will be loaded into the main memory.

Main Memory Management


VIRTUAL MEMORY
• The main driving force behind virtual memory is a concept called
  locality of reference.

• Analysis of programs shows that most of their execution time is spent
  on routines in which many instructions are executed repeatedly.

• These instructions may constitute a simple loop, nested loops, or a
  few procedures or functions that repeatedly call each other.

• In other words, locality of reference states that many instructions in
  a few localized areas of the program are repeatedly executed during
  some period of time and that the remainder of the program is accessed
  relatively infrequently.

Main Memory Management


VIRTUAL MEMORY

• This led to the idea that a process may still be executed without
  requiring all of its pages to be loaded into the main memory at the
  same time.

• Another way of stating the locality of reference is that once an
  instruction or data is referenced or used, the chances of it being
  used again in the very near future are very high.

  This means that pages that were recently referenced by a process are
  most likely to be used again.

  So only these pages are the ones that are needed inside the main
  memory for a certain period of time.

Main Memory Management


VIRTUAL MEMORY

• In virtual memory, the operating system loads only the pages of a
  process that are needed.

  The remaining pages are kept in a special area of the hard disk and
  will only be loaded when needed.

  So the computer system virtually has a very large memory since part of
  the hard disk is used as an "extension" of the main memory.

Main Memory Management


VIRTUAL MEMORY

• The advantages of this are:

  1. More processes can be placed into the main memory since processes
     require less memory space.

     This will increase the degree of multiprogramming, leading to an
     increase in CPU utilization.

  2. Programmers may now write programs that are larger than the main
     memory.

     The theoretical limit to the size of a program is equal to the size
     of the main memory plus the capacity of the hard disk of the
     system.

Main Memory Management


VIRTUAL MEMORY

• So in virtual memory, all pages of a process are initially stored in
  the hard disk.

• A page will be brought or copied into the main memory only if it is
  needed.

• This is also called demand paging: pages are loaded into memory only
  upon demand.

Main Memory Management


VIRTUAL MEMORY

• Take note that the pages that are inside the main memory are just
  copies of the pages in the hard disk.

• When a page is loaded into the main memory, it is simply copied to the
  memory and not physically removed from the hard disk.

• The group of pages of a process that are in main memory is called the
  working set of that process.

Main Memory Management


THE VIRTUAL MEMORY PAGE TABLE
• Since not all of the pages of a process are loaded into memory, the
  page table must have some information on which pages are in the main
  memory.

• The page table in virtual memory has a valid-invalid bit for each
  entry.

• If the valid-invalid bit of a page entry is 1, then this means that
  this particular page is inside the main memory. If it is 0, then the
  page is in the hard disk.

  Page Table for Process P1 (frame number, valid-invalid bit):
  page 0 -> frame 13, 1      page 4 -> frame 10, 1
  page 1 -> --,       0      page 5 -> --,        0
  page 2 -> --,       0      page 6 -> frame 7,   1
  page 3 -> frame 2,  1      page 7 -> --,        0

Main Memory Management


THE VIRTUAL MEMORY PAGE TABLE
• So every time the CPU needs to execute instructions in a page, the
  memory management unit hardware first checks if the page is inside the
  main memory by looking at its valid-invalid bit.

• If this bit is equal to 1, then the page is inside the main memory and
  execution proceeds.

  However, if the bit is equal to 0, then the page needed is not in the
  main memory.

• This is called a page fault.

Main Memory Management


PAGE FAULTS

• A page fault occurs when the CPU needs to execute an instruction which
  is in a page that is not inside the main memory (it is in secondary
  storage).

• The operating system will now take the following steps to bring the
  desired page into the main memory:

  1. The operating system will have to find a free frame in the main
     memory for this new page.

     If there is no free frame available, the operating system will have
     to remove one of the pages in memory to give way to the new
     incoming page.

Main Memory Management


PAGE FAULTS

  2. Once a free frame is available, the operating system will locate
     the desired page in the hard disk and schedule it to be transferred
     to the main memory.

  3. Once the page has been transferred to the main memory, the page
     table of the process is updated to reflect this development.

  The process which caused the page fault may now resume its execution.
  A minimal sketch of this sequence is given below.
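
To tie the valid-invalid bit and the page-fault steps together, here is
a minimal Python sketch of a demand-paging access routine. The data
structures, the FIFO choice of victim, and the function names are
illustrative assumptions rather than details from the slides.

  # Minimal sketch of demand paging: check the valid bit, and on a page
  # fault find (or free) a frame, load the page, and update the table.
  NUM_FRAMES = 4
  page_table = {p: {"frame": None, "valid": 0} for p in range(8)}
  free_frames = list(range(NUM_FRAMES))
  loaded_pages = []                         # pages in memory, oldest first

  def load_from_disk(page):                 # stand-in for the disk transfer
      print(f"transferring page {page} from disk")

  def access(page):
      entry = page_table[page]
      if entry["valid"] == 1:               # page already in main memory
          return entry["frame"]
      # --- page fault ---
      if free_frames:
          frame = free_frames.pop(0)        # step 1: use a free frame
      else:
          victim = loaded_pages.pop(0)      # step 1: no free frame, evict one
          frame = page_table[victim]["frame"]
          page_table[victim] = {"frame": None, "valid": 0}
      load_from_disk(page)                  # step 2: bring the page in
      page_table[page] = {"frame": frame, "valid": 1}   # step 3: update table
      loaded_pages.append(page)
      return frame                          # the process can now resume

  for p in [0, 1, 2, 3, 4, 0]:              # the access to page 4 evicts page 0
      access(p)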

Main Memory Management


PAGE FAULTS

• The time needed to take care of a page fault is called the page fault
  time.

• The operating system must make sure that page faults are minimized so
  as not to slow down the overall system performance.

• The more frames there are for a process, the lower the page fault rate
  will be.

  This is because if there are more frames, then there will be more
  pages that can be loaded into the main memory.

Main Memory Management


FRAME ALLOCATION

• Since there are several processes, the question now is how many frames
  will be allocated to each process?

• The simplest frame allocation technique is to divide the frames
  equally among all processes.

• This is called the equal allocation technique.

• If there are 2,048 main memory frames and 8 processes, then each
  process will be allocated 2,048 / 8 = 256 frames.

Main Memory Management


FRAME ALLOCATION

• The problem with this technique is that small processes may be given
  too many frames while large processes may not have enough.

  For example, if a process only has 100 pages, allocating it 256 frames
  means that 156 of those frames will be wasted.

• On the other hand, if a process has 1,000 pages, giving it 256 frames
  means that only about 25% of its pages will be inside the main memory
  at any one time.

  This may result in a high page fault rate, thereby causing the process
  to execute slowly.

Main Memory Management


FRAME ALLOCATION

• A better alternative is to use the proportional allocation technique.

• The memory frames are allocated to processes according to their sizes.

• Smaller processes get fewer frames while bigger processes get more
  frames.
Main Memory Management
FRAME ALLOCATION

• As an example, assume that there are three processes. Process P1 has
  1,000 pages, process P2 has 3,000 pages, and process P3 has 6,000
  pages.

  The total number of pages is 1,000 + 3,000 + 6,000 = 10,000 pages.

  The allocation for each process now depends on the proportion of the
  process relative to the total number of pages.

  Computing for the three processes:

  Proportion of P1 = 1,000 / 10,000 = 0.1
  Proportion of P2 = 3,000 / 10,000 = 0.3
  Proportion of P3 = 6,000 / 10,000 = 0.6

Main Memory Management


FRAME ALLOCATION

The above computations show that P1 has 10% of the total pages, P2 has
30%, and P3 has 60%.

The proportional allocation technique states that P1, P2, and P3 should
be allocated 10%, 30%, and 60% of the available frames respectively.

If there are 500 frames:

No. of Frames Allocated to P1 = 0.1 × 500 = 50 frames
No. of Frames Allocated to P2 = 0.3 × 500 = 150 frames
No. of Frames Allocated to P3 = 0.6 × 500 = 300 frames
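
The same computation in a minimal Python sketch (the function name is
ours; the figures are taken from the example above):

  # Minimal sketch of proportional frame allocation.
  def proportional_allocation(process_pages, total_frames):
      total_pages = sum(process_pages.values())
      return {name: int(pages / total_pages * total_frames)
              for name, pages in process_pages.items()}

  sizes = {"P1": 1000, "P2": 3000, "P3": 6000}
  print(proportional_allocation(sizes, 500))
  # {'P1': 50, 'P2': 150, 'P3': 300}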

Main Memory Management


PAGE REPLACEMENT

• If a page fault occurs, the operating system must load the requested
  page into the main memory.

• If there are no more free frames available for allocation, the
  operating system must remove one of the existing pages in memory to
  free up space for the new page (page replacement).

• If the page to be removed was not modified, it can simply be
  overwritten by the new incoming page since there is an exact copy of
  it in the hard disk.

• However, if the selected page has been modified, that page has to be
  written back to the hard disk to update the copy there.

Main Memory Management


PAGE REPLACEMENT

• The page table in virtual memory also has a dirty bit for each entry
  to indicate whether a page has been modified or not.

• If this bit is 1, then the page has been updated or modified.

• Otherwise, this bit is equal to 0.
Main Memory Management
PAGE REPLACEMENT

• For example, consider a page table with a valid-invalid bit and a
  dirty bit for each entry:

  page 0 -> frame 13, valid,   dirty bit 0
  page 1 -> --,       invalid
  page 2 -> --,       invalid
  page 3 -> frame 2,  valid,   dirty bit 1
  page 4 -> frame 10, valid,   dirty bit 0
  page 5 -> --,       invalid
  page 6 -> frame 7,  valid,   dirty bit 0
  page 7 -> --,       invalid

• If page 3 was selected for replacement, it has to be written back to
  the hard disk since its dirty bit is 1, which means it was modified.

• If pages 0, 4, or 6 were to be replaced, they can simply be
  overwritten since their dirty bits are all 0s (they were not
  modified).

Main Memory Management


PAGE REPLACEMENT

• The major issue concerning page replacement is deciding which among
  the several pages in main memory will be selected for replacement.

Main Memory Management


PAGE REPLACEMENT

• For example, suppose that a process needed a page that was not in main
  memory (a page fault occurs).

  – Assume that the operating system responds to this page fault by
    randomly selecting a page to be removed to make space for the
    requested page.

  – Once the requested page is inside the memory, the process may
    continue executing.

  – During the next reference, it may happen that the process requests
    the page that was recently removed.

  – This would cause another page fault.

Main Memory Management


PAGE REPLACEMENT

• If this keeps on happening, the CPU will spend more time taking care
  of page faults than doing productive work (executing processes).

• This situation is called thrashing, and the CPU efficiency may drop to
  an extremely low value.

• Aside from wrong page replacement decisions, thrashing also occurs if
  the processes being executed have too few frames (because there are
  too many processes).

Main Memory Management


PAGE REPLACEMENT ALGORITHMS

• It is therefore very critical that page replacement decisions ensure
  that the page fault rate is minimized.

• Otherwise, this will affect the performance of the entire computer
  system.

• There are existing page replacement algorithms that an operating
  system may use to determine the right page to be replaced.

Main Memory Management


PAGE REPLACEMENT ALGORITHMS
• For the purpose of comparing the different algorithms, each one will
  be evaluated by tracing or simulating it using a page reference
  pattern.

• A page reference pattern is simply a trace or sequence of pages that
  are referenced by a certain program.

• For example, if the reference pattern is 3, 2, 3, 1, 2, 5, 7, 6, then
  this means that page 3 was referenced first, followed by page 2,
  followed by page 3 again, followed by page 1, and so on and so forth.

• A good page replacement algorithm is the one with the lowest page
  fault rate.

Main Memory Management


OPTIMAL ALGORITHM
• The Optimal algorithm selects the page that will not be needed or
  referenced for the longest period of time in the future.

• This algorithm produces the lowest number of page faults among all the
  algorithms.

• Assume that there are 4 memory frames available (frame 0, frame 1,
  frame 2, and frame 3) and the following page reference pattern:

  1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1
Main Memory Management
OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 0, all four main memory frames are initially empty and no page
has been referenced yet.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 1, page 1 is referenced by the process. Since the main memory
is empty, the reference results in a page fault and page 1 is brought
into the main memory and placed in frame 0.

Frames after time = 1: frame 0 = page 1; frames 1 to 3 empty.
Page faults so far: 1.

Take note that this is an initialization page fault. These are the
faults produced while the memory frames are being filled up.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 2, page 2 is referenced. Since page 1 is the only page inside
the main memory, this again leads to an initialization page fault, and
page 2 is brought into the main memory and placed in frame 1.

Frames after time = 2: frame 0 = page 1, frame 1 = page 2.
Page faults so far: 2.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 3, page 3 is referenced. Since it is not inside the main
memory, this generates another initialization page fault, and page 3 is
placed in frame 2.

Frames after time = 3: frame 0 = page 1, frame 1 = page 2,
frame 2 = page 3.  Page faults so far: 3.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 4, page 4 is referenced. Since it is not inside the main
memory, this generates another initialization page fault (there are now
four initialization faults), and page 4 is loaded into frame 3.

Frames after time = 4: frame 0 = page 1, frame 1 = page 2,
frame 2 = page 3, frame 3 = page 4.  Page faults so far: 4.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

During the next four time intervals (time = 5, 6, 7, and 8), pages 3, 1,
4, and 2 are referenced respectively.

Since these pages are already inside the main memory, no page faults are
generated and execution continues normally. Page faults so far: 4.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 9, page 5 is referenced. Since it is not inside the main
memory, this reference generates a page fault. Page 5 has to be brought
into the main memory, but since there are no free frames available, a
page inside the memory has to be replaced.

The choices are pages 1, 2, 3, and 4.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

Page 1 will be used again at time = 14. Page 2 will be referenced again
at time = 10 and time = 12. Page 3 will be needed again at time = 13.
Page 4 will not be used again within the given time period.

So following the optimal strategy, page 4 is the one that will not be
used or needed for the longest period of time. Page 5 is therefore
placed in frame 3, replacing page 4.

Frames after time = 9: frame 0 = page 1, frame 1 = page 2,
frame 2 = page 3, frame 3 = page 5.  Page faults so far: 5.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 10, page 2 is referenced again. This is not a page fault since
page 2 is still inside the main memory.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

At time = 11, page 6 is referenced. Since it is not inside the main
memory, the reference results in a page fault. Possible choices for
pages to be replaced are pages 1, 2, 3, and 5. Page 1 will be used again
at time = 14, page 2 at time = 12, and page 3 at time = 13. Page 5 will
not be used again within the given time period, so page 6 is placed in
frame 3, replacing page 5.

Frames after time = 11: frame 0 = page 1, frame 1 = page 2,
frame 2 = page 3, frame 3 = page 6.  Page faults so far: 6.

Main Memory Management


OPTIMAL ALGORITHM
• The simulation is as follows:

Time            :  1  2  3  4  5  6  7  8  9 10 11 12 13 14
Page referenced :  1  2  3  4  3  1  4  2  5  2  6  2  3  1
Frame 0         :  1  1  1  1  1  1  1  1  1  1  1  1  1  1
Frame 1         :     2  2  2  2  2  2  2  2  2  2  2  2  2
Frame 2         :        3  3  3  3  3  3  3  3  3  3  3  3
Frame 3         :           4  4  4  4  4  5  5  6  6  6  6
Page fault      :  ×  ×  ×  ×              ×     ×
Page replaced   :                          4     5

From time = 12 to time = 14, there are no page faults because the pages
that are needed (pages 2, 3, and 1) are all inside the main memory.

So the total number of page faults for this reference pattern using the
optimal algorithm is 6. If initialization page faults are ignored, then
there are only 2 page faults.

Main Memory Management


OPTIMAL ALGORITHM

• As mentioned earlier, the main advantage of the optimal strategy is
  that it has the lowest page fault rate possible.

• However, it is impossible to implement because it requires the
  operating system to know what pages will be referenced in the future.

• The optimal strategy is often used to compare the results of the other
  algorithms since it produces the best possible result.
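
Although the optimal policy cannot be implemented in a real operating
system, it is easy to simulate offline when the whole reference pattern
is known. A minimal Python sketch, using the reference string and frame
count from the example (the function name is ours):

  # Minimal offline simulation of the Optimal (farthest-future-use)
  # page replacement policy.
  def simulate_optimal(references, num_frames):
      frames, faults = [], 0
      for i, page in enumerate(references):
          if page in frames:
              continue                              # hit, nothing to do
          faults += 1
          if len(frames) < num_frames:
              frames.append(page)                   # initialization fault
              continue
          future = references[i + 1:]
          # evict the resident page whose next use is farthest away
          # (pages never used again get an infinite distance)
          victim = max(frames, key=lambda p: future.index(p)
                       if p in future else float("inf"))
          frames[frames.index(victim)] = page
      return faults

  pattern = [1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1]
  print(simulate_optimal(pattern, 4))   # 6 faults (2 after initialization)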

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM

• The First-In, First-Out (FIFO) algorithm selects the pages to be
  replaced on a first-in, first-out basis, meaning the first page to
  enter the main memory is the first one to be replaced.

• In other words, it selects the oldest page in main memory (the one
  which has been in memory for the longest time).

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

At time = 0, all four main memory frames are initially empty.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

As in the optimal algorithm, the first four memory references (from
time = 1 to time = 4) result in initialization page faults, and pages 1,
2, 3, and 4 are brought into the main memory in that order (frames 0
to 3 respectively). Page faults so far: 4.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

From time = 5 to time = 8, no page faults are generated since all the
pages needed (pages 3, 1, 4, and 2) are already in the main memory.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

At time = 9, a reference to page 5 leads to a page fault.

The candidates for replacement are pages 1, 2, 3, and 4. Among them,
page 1 is the oldest since it entered the main memory at time = 1; the
others were placed in the main memory after that. So page 5 is placed in
frame 0, replacing page 1.

Frames after time = 9: frame 0 = page 5, frame 1 = page 2,
frame 2 = page 3, frame 3 = page 4.  Page faults so far: 5.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

At time = 10, no page fault is generated since page 2 is still in the
main memory.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

At time = 11, page 6 is referenced, so a page fault occurs.

The pages in memory that can be replaced are pages 2, 3, 4, and 5.
Page 2 is the oldest since it was placed inside the main memory at
time = 2. It is therefore replaced by page 6 (in frame 1).

Frames after time = 11: frame 0 = page 5, frame 1 = page 6,
frame 2 = page 3, frame 3 = page 4.  Page faults so far: 6.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

At time = 12, page 2 is referenced, resulting in a page fault.

Possible choices for replacement are pages 3, 4, 5, and 6. Among these,
page 3 is the oldest since it entered at time = 3. So it is replaced,
and page 2 is placed in frame 2.

Frames after time = 12: frame 0 = page 5, frame 1 = page 6,
frame 2 = page 2, frame 3 = page 4.  Page faults so far: 7.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

At time = 13, page 3 is referenced, generating another page fault.

Among the pages in memory, page 4 is the oldest since it entered the
memory at time = 4. So it is replaced by page 3 (in frame 3).

Frames after time = 13: frame 0 = page 5, frame 1 = page 6,
frame 2 = page 2, frame 3 = page 3.  Page faults so far: 8.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM
• The simulation is as follows:

Time            :  1  2  3  4  5  6  7  8  9 10 11 12 13 14
Page referenced :  1  2  3  4  3  1  4  2  5  2  6  2  3  1
Frame 0         :  1  1  1  1  1  1  1  1  5  5  5  5  5  1
Frame 1         :     2  2  2  2  2  2  2  2  2  6  6  6  6
Frame 2         :        3  3  3  3  3  3  3  3  3  2  2  2
Frame 3         :           4  4  4  4  4  4  4  4  4  3  3
Page fault      :  ×  ×  ×  ×              ×     ×  ×  ×  ×
Page replaced   :                          1     2  3  4  5

At time = 14, page 1 is referenced, causing another page fault. Since
page 5 entered the memory at time = 9, it is the oldest, so it is
replaced by page 1.

So the total number of page faults for the FIFO algorithm is 5 (not
counting initialization page faults), or 9 in all.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM

• The operating system must keep track of the time a page was brought
  into the main memory in order to determine the oldest page in memory.

• An alternative to this is for the operating system to use a FIFO
  queue.

  Every time a page is brought inside the main memory, the operating
  system records its page number at the rear of this queue. The page
  numbers of the pages that were brought in earlier will be near the
  front of the queue.

  So if a page has to be replaced, the page whose page number is at the
  front of the queue at that time is the one that is replaced.
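
A minimal Python sketch of FIFO replacement driven by such a queue (the
function name is ours; the reference string is the running example):

  # Minimal sketch of FIFO page replacement using a queue of page
  # numbers, oldest at the front.
  from collections import deque

  def simulate_fifo(references, num_frames):
      frames, queue, faults = set(), deque(), 0
      for page in references:
          if page in frames:
              continue                       # hit
          faults += 1
          if len(frames) == num_frames:
              oldest = queue.popleft()       # page at the front of the queue
              frames.remove(oldest)
          frames.add(page)
          queue.append(page)                 # newest page goes to the rear
      return faults

  pattern = [1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1]
  print(simulate_fifo(pattern, 4))           # 9 faults (5 after initialization)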

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM

• The main advantage of the FIFO algorithm is its simplicity.

  It is easy for the operating system to decide which page is to be
  replaced.

  Therefore, the algorithm has a relatively low overhead since the CPU
  does not have to do much computation to determine the page to be
  replaced.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM

• Its disadvantage is that removing the oldest page may not be the best
  choice.

  Many pages of a process are actually used from the start of execution
  until the end.

  In other words, the FIFO algorithm has a tendency to replace pages
  that were loaded early even though these pages are heavily used.

  The performance of this algorithm is known to be quite poor in terms
  of the number of page faults generated.

Main Memory Management


FIRST-IN, FIRST-OUT ALGORITHM

• It has been mentioned earlier that the more main memory frames are
  allocated to a process, the lower its page fault rate will be.

• In the FIFO algorithm, however, the page fault rate of certain
  programs actually increased when the number of allocated frames was
  increased.

  This phenomenon is known as Belady’s Anomaly. Take note though that
  this does not happen frequently.

Main Memory Management


LEAST RECENTLY USED ALGORITHM

• The Least Recently Used (LRU) algorithm selects the least recently
  used page as the one to be replaced.

• This algorithm exploits the locality of reference principle.

• Recall that an instruction that has been executed recently is most
  likely to be executed again.

• Applying this to paging or virtual memory, a page that has been
  referenced recently has a very good chance of being needed again.

• Conversely, a page that has not been recently used is less likely to
  be used again and is thus a good choice for the page to be replaced.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

At time = 0, all four main memory frames are initially empty.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

As in the first two algorithms, the first four memory references (from
time = 1 to time = 4) result in initialization page faults, and pages 1,
2, 3, and 4 are brought into the main memory in that order (frames 0
to 3 respectively). Page faults so far: 4.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

From time = 5 to time = 8, no page faults are generated since all the
pages needed (pages 3, 1, 4, and 2) are already in the main memory.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

At time = 9, page 5 is referenced, causing a page fault. The candidates
for replacement are pages 1, 2, 3, and 4.

Page 1 was most recently used at time = 6, page 2 at time = 8, page 3 at
time = 5, and page 4 at time = 7. So among the four pages inside the
main memory, page 3 is the least recently used. It is therefore replaced
by page 5 (in frame 2).

Frames after time = 9: frame 0 = page 1, frame 1 = page 2,
frame 2 = page 5, frame 3 = page 4.  Page faults so far: 5.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

At time = 10, no page fault is generated since page 2 is still in the
main memory.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

At time = 11, a reference to page 6 results in a page fault. Among the
pages in the main memory, page 1 was last referenced at time = 6, page 2
at time = 10, page 4 at time = 7, and page 5 at time = 9.

Since page 1 is the least recently used page, it is replaced by page 6
(in frame 0).

Frames after time = 11: frame 0 = page 6, frame 1 = page 2,
frame 2 = page 5, frame 3 = page 4.  Page faults so far: 6.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

At time = 12, no page fault is generated since page 2 is still in the
main memory.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

At time = 13, page 3 is referenced, resulting in a page fault.

Among the pages in the main memory, page 4 is the least recently used
(it was last referenced at time = 7) and is thus replaced by page 3
(in frame 3).

Frames after time = 13: frame 0 = page 6, frame 1 = page 2,
frame 2 = page 5, frame 3 = page 3.  Page faults so far: 7.

Main Memory Management


LEAST RECENTLY USED ALGORITHM
• The simulation is as follows:

Time            :  1  2  3  4  5  6  7  8  9 10 11 12 13 14
Page referenced :  1  2  3  4  3  1  4  2  5  2  6  2  3  1
Frame 0         :  1  1  1  1  1  1  1  1  1  1  6  6  6  6
Frame 1         :     2  2  2  2  2  2  2  2  2  2  2  2  2
Frame 2         :        3  3  3  3  3  3  5  5  5  5  5  1
Frame 3         :           4  4  4  4  4  4  4  4  4  3  3
Page fault      :  ×  ×  ×  ×              ×     ×     ×  ×
Page replaced   :                          3     1     4  5

At time = 14, page 1 is referenced, causing a page fault. Among the
pages in the main memory, page 5 is the least recently used. It is
therefore replaced by page 1 (in frame 2).

The total number of page faults using the LRU algorithm is therefore 4
(not counting the initialization faults), or 8 in all.

Main Memory Management


LEAST RECENTLY USED ALGORITHM

• As in the FIFO algorithm, the operating system has to keep track of
  the time each page was referenced.

• It can also maintain a stack wherein the page number of the most
  recently used page is placed at the top of the stack.

• This guarantees that the page number of the least recently used page
  will always be at the bottom of the stack.
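
A minimal Python sketch of LRU replacement, keeping the "stack" as a
list whose last element is the most recently used page (the function
name is ours; the reference string is the running example):

  # Minimal sketch of LRU page replacement. stack[0] is the least
  # recently used page, stack[-1] the most recently used one.
  def simulate_lru(references, num_frames):
      stack, faults = [], 0
      for page in references:
          if page in stack:
              stack.remove(page)       # hit: move the page to the top
          else:
              faults += 1
              if len(stack) == num_frames:
                  stack.pop(0)         # evict the least recently used page
          stack.append(page)           # the referenced page is now most recent
      return faults

  pattern = [1, 2, 3, 4, 3, 1, 4, 2, 5, 2, 6, 2, 3, 1]
  print(simulate_lru(pattern, 4))      # 8 faults (4 after initialization)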

Main Memory Management


LEAST RECENTLY USED ALGORITHM

• The main advantage of the LRU algorithm is that it exhibits relatively
  good performance because of its adherence to the locality of reference
  principle.

• Its disadvantage is its high overhead because of the time spent by the
  CPU updating the stack each time a page reference is made.

Main Memory Management
