
Operating Systems:

Internals and Design Principles, 6/E


William Stallings

Memory Management

Patricia Roy
Manatee Community College, Venice, FL
©2008, Prentice Hall
The need for memory
management
• Memory is cheap today, and getting
cheaper
– But applications are demanding more and
more memory; there is never enough!
• Memory management involves swapping
blocks of data between main memory and
secondary storage.
• Memory I/O is slow compared to a CPU
– The OS must cleverly time the swapping to
maximise the CPU’s efficiency
Memory Management

Memory needs to be allocated to ensure a
reasonable supply of ready processes to
consume available processor time
Memory Management
Requirements
• Relocation
• Protection
• Sharing
• Logical organisation
• Physical organisation
Requirements: Relocation
• The programmer does not know where the
program will be placed in memory when it
is executed,
– it may be swapped to disk and return to main
memory at a different location (relocated)
• Memory references must be translated to
the actual physical memory address
Memory Management
Terms
Table 7.1 Memory Management Terms

Frame – Fixed-length block of main memory.
Page – Fixed-length block of data in secondary memory (e.g. on disk).
Segment – Variable-length block of data that resides in secondary memory.
Addressing
Requirements: Protection
• Processes should not be able to reference
memory locations in another process
without permission
• Impossible to check absolute addresses at
compile time
• Must be checked at run time
Requirements: Sharing
• Allow several processes to access the
same portion of memory
• Better to allow each process access to the
same copy of the program rather than
having its own separate copy
Requirements: Logical
Organization
• Memory is organized linearly (usually)
• Programs are written in modules
– Modules can be written and compiled
independently
• Different degrees of protection given to
modules (read-only, execute-only)
• Share modules among processes
• Segmentation helps here
Requirements: Physical
Organization
• Cannot leave the programmer with the
responsibility to manage memory
• Memory available for a program plus its
data may be insufficient
– Overlaying allows various modules to be
assigned the same region of memory but is
time consuming to program
• Programmer does not know how much
space will be available
Partitioning
• An early method of managing memory
– Pre-virtual memory
– Not used much now
• But, it will clarify the later discussion of
virtual memory if we look first at
partitioning
– Virtual Memory has evolved from the
partitioning methods
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
• Simple Paging
• Simple Segmentation
• Virtual Memory Paging
• Virtual Memory Segmentation
Fixed Partitioning
• Equal-size partitions (see fig 7.3a)
– Any process whose size is less than
or equal to the partition size can be
loaded into an available partition
• The operating system can swap a
process out of a partition
– If none are in a ready or running
state
Fixed Partitioning Problems
• A program may not fit in a partition.
– The programmer must design the program
with overlays
• Main memory use is inefficient.
– Any program, no matter how small, occupies
an entire partition.
– This results in internal fragmentation.
Solution – Unequal Size
Partitions
• Lessens both problems
– but doesn’t solve completely
• In Fig 7.3b,
– Programs up to 16M can be
accommodated without overlay
– Smaller programs can be placed in
smaller partitions, reducing internal
fragmentation
Placement Algorithm
• Equal-size
– Placement is trivial (no options)
• Unequal-size
– Can assign each process to the smallest
partition within which it will fit
– Queue for each partition
– Processes are assigned in such a way as to
minimize wasted memory within a partition
Fixed Partitioning
Remaining Problems with
Fixed Partitions
• The number of active processes is limited
by the system
– i.e. limited by the pre-determined number of
partitions
• A large number of very small processes will
not use the space efficiently
– In either fixed or variable length partition
methods
Dynamic Partitioning
• Partitions are of variable length and
number
• Process is allocated exactly as much
memory as required
Dynamic Partitioning
Example
[Figure: successive memory snapshots showing the OS (8M), processes P1 (20M), P2 (14M), P3 (18M) and P4 (8M), and the empty fragments (6M, 6M, 4M) left between them]
• External fragmentation
– Memory external to all processes is
fragmented
• Can resolve using compaction
– OS moves processes so that they are
contiguous
– Time consuming and wastes CPU time

Refer to Figure 7.4


Dynamic Partitioning
• Operating system must decide which free
block to allocate to a process
• Best-fit algorithm
– Chooses the block that is closest in size to the
request
– Worst performer overall
– Since the smallest suitable block is chosen,
the fragment left over is as small as possible,
and often too small to be useful
– Memory compaction must be done more often
Dynamic Partitioning
• First-fit algorithm
– Scans memory from the beginning and
chooses the first available block that is large
enough
– Fastest
– May have many processes loaded in the front
end of memory that must be searched over
when trying to find a free block
Dynamic Partitioning
• Next-fit
– Scans memory from the location of the last
placement
– More often allocates a block of memory at the
end of memory where the largest block is
found
– The largest block of memory is broken up into
smaller blocks
– Compaction is required to obtain a large block
at the end of memory
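The three placement decisions above can be sketched over a simple free-block list; the (start, size) pair representation and the block sizes in the usage example are illustrative, not from the text.

```python
# Sketch of first-, best- and next-fit placement over a free-block
# list; each free block is an illustrative (start, size) pair.

def first_fit(free_blocks, size):
    """First block from the beginning that is large enough."""
    for block in free_blocks:
        if block[1] >= size:
            return block
    return None

def best_fit(free_blocks, size):
    """Block whose size is closest to the request."""
    fits = [b for b in free_blocks if b[1] >= size]
    return min(fits, key=lambda b: b[1], default=None)

def next_fit(free_blocks, size, last):
    """Scan from the location of the last placement, wrapping."""
    n = len(free_blocks)
    for i in range(n):
        block = free_blocks[(last + i) % n]
        if block[1] >= size:
            return block
    return None
```

For a request of 6 from free blocks [(0, 8), (20, 12), (40, 6)], first-fit returns (0, 8) while best-fit returns the exact-size block (40, 6).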
Allocation
Buddy System
• Entire space available is treated as a
single block of size 2^U
• If a request of size s where 2^(U-1) < s ≤ 2^U
– entire block is allocated
• Otherwise block is split into two equal
buddies
– Process continues until smallest block greater
than or equal to s is generated
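The splitting rule above can be sketched as follows; the function name is mine, and only the block sizing is shown, not the free-list bookkeeping of a full buddy allocator.

```python
# Buddy-system sizing sketch: split the 2**U block in half until the
# smallest block greater than or equal to the request s is reached.

def buddy_block_size(s, U):
    size = 2 ** U
    if not 0 < s <= size:
        raise ValueError("request must fit in the total space")
    while size // 2 >= s:     # a half-size block would still fit: split
        size //= 2
    return size
```

A request of 100 in a 2^10 = 1024 space is satisfied from a 128-unit block, since 64 < 100 ≤ 128.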
Example of Buddy System
Tree Representation of
Buddy System
Relocation
• When program loaded into memory the
actual (absolute) memory locations are
determined
• A process may occupy different partitions
which means different absolute memory
locations during execution
– Swapping
– Compaction
Addresses
• Logical
– Reference to a memory location independent
of the current assignment of data to memory.
• Relative
– Address expressed as a location relative to
some known point.
• Physical or Absolute
– The absolute address or actual location in
main memory.
Relocation
Registers Used during
Execution
• Base register
– Starting address for the process
• Bounds register
– Ending location of the process
• These values are set when the process is
loaded or when the process is swapped in
Registers Used during
Execution
• The value of the base register is added to
a relative address to produce an absolute
address
• The resulting address is compared with
the value in the bounds register
• If the address is not within bounds, an
interrupt is generated to the operating
system
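The two steps above can be sketched as follows; the register values are illustrative.

```python
# Sketch of base/bounds translation done on every memory reference.

BASE = 0x4000     # base register: starting address of the process
BOUNDS = 0x5FFF   # bounds register: ending address of the process

def translate(relative_addr):
    absolute = BASE + relative_addr   # base + relative -> absolute
    if not BASE <= absolute <= BOUNDS:
        # address out of bounds: an interrupt is raised to the OS
        raise MemoryError("bounds violation")
    return absolute
```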
Paging
• Partition memory into small equal fixed-
size chunks and divide each process into
the same size chunks
• The chunks of a process are called pages
• The chunks of memory are called frames
Paging
• Operating system maintains a page table
for each process
– Contains the frame location for each page in
the process
– Memory address consist of a page number
and offset within the page
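The page-number/offset split can be sketched as follows; the 1 KiB page size and the page-table contents are assumptions for illustration.

```python
# Sketch of paged address translation: split the logical address
# into page number and offset, then look up the frame.

OFFSET_BITS = 10                    # 2**10 = 1024-byte pages assumed
PAGE_SIZE = 1 << OFFSET_BITS

page_table = {0: 5, 1: 3, 2: 7}     # page number -> frame number

def translate(logical):
    page = logical >> OFFSET_BITS           # high bits: page number
    offset = logical & (PAGE_SIZE - 1)      # low bits: offset in page
    frame = page_table[page]                # per-process page table
    return (frame << OFFSET_BITS) | offset  # frame keeps same offset
```

Logical address 1124 (page 1, offset 100) maps to frame 3, giving physical address 3 × 1024 + 100 = 3172.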
Processes and Frames

[Figure: fifteen frames of main memory holding the pages of processes A (A.0–A.3), B (B.0–B.2), C (C.0–C.3) and D (D.0–D.4)]
Page Table
Address Translation
Page Tables
• Page tables are also stored in virtual
memory
• When a process is running, part of its
page table is in main memory
Two-Level
Hierarchical Page Table
Address Translation for
Hierarchical page table
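The two-level translation can be sketched by splitting the virtual address; the 10/10/12-bit split below is a common choice for 32-bit addresses with 4 KiB pages, assumed here for illustration.

```python
# Sketch of the two-level address split: directory index, page-table
# index, and offset (10/10/12 bits assumed).

def split(vaddr):
    dir_index = (vaddr >> 22) & 0x3FF    # selects a page directory entry
    table_index = (vaddr >> 12) & 0x3FF  # selects an entry in that table
    offset = vaddr & 0xFFF               # byte within the 4 KiB page
    return dir_index, table_index, offset
```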
Page tables
grow proportionally
• A drawback of the type of page tables just
discussed is that their size is proportional
to that of the virtual address space.

• An alternative is Inverted Page Tables


Inverted Page Table
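A minimal sketch of the inverted structure: one entry per physical frame, located by hashing the (process, page) pair, so the table's size tracks physical memory rather than the virtual address space. All names, sizes, and the hash function are illustrative.

```python
# Inverted page table sketch: one entry per frame, found by hashing
# (pid, page); hash collisions are chained through the frames.

NUM_FRAMES = 8
entries = [None] * NUM_FRAMES    # frame -> (pid, page) or None
chain = [None] * NUM_FRAMES      # frame -> next frame in hash chain
buckets = {}                     # hash bucket -> first frame in chain

def insert(pid, page, frame):
    h = (pid * 31 + page) % NUM_FRAMES
    entries[frame] = (pid, page)
    chain[frame] = buckets.get(h)    # link in front of the chain
    buckets[h] = frame

def lookup(pid, page):
    frame = buckets.get((pid * 31 + page) % NUM_FRAMES)
    while frame is not None:         # walk the collision chain
        if entries[frame] == (pid, page):
            return frame
        frame = chain[frame]
    return None                      # not resident: page fault
```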
Translation Lookaside
Buffer
• Each virtual memory reference can cause
two physical memory accesses
– One to fetch the page table
– One to fetch the data
• To overcome this problem a high-speed
cache is set up for page table entries
– Called a Translation Lookaside Buffer (TLB)
– Contains page table entries that have been
most recently used
TLB Operation
• Given a virtual address,
– processor examines the TLB
• If page table entry is present (TLB hit),
– the frame number is retrieved and the real
address is formed
• If page table entry is not found in the TLB
(TLB miss),
– the page number is used to index the process
page table
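The hit/miss flow above can be sketched with a dict standing in for the associative hardware; the page-table contents are illustrative.

```python
# TLB lookup sketch: check the TLB first, fall back to the page
# table on a miss, and cache the entry for next time.

tlb = {}                         # page number -> frame (recent entries)
page_table = {0: 9, 1: 4, 2: 6}  # full per-process page table

def get_frame(page):
    if page in tlb:              # TLB hit: form the real address
        return tlb[page]
    frame = page_table[page]     # TLB miss: index the page table
    tlb[page] = frame            # update the TLB with the new entry
    return frame
```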
Looking into the
Process Page Table
• First checks if page is already in main
memory
– If not in main memory a page fault is issued
• The TLB is updated to include the new
page entry
Translation Lookaside
Buffer
TLB operation
Associative Mapping
• As the TLB only contains some of the
page table entries we cannot simply index
into the TLB based on the page number
– Each TLB entry must include the page
number as well as the complete page table
entry
• The processor is able to simultaneously
query numerous TLB entries to determine
if there is a page number match
Translation Lookaside
Buffer
TLB and
Cache Operation
Page Size
• The smaller the page size, the less internal
fragmentation
• But a smaller page size means more pages
required per process
– More pages per process means larger page
tables
• Larger page tables mean a larger portion of
the page tables resides in virtual memory
Page Size
• Secondary memory is designed to
efficiently transfer large blocks of data so a
large page size is better
Further complications
to Page Size
• Small page size, large number of pages
will be found in main memory
• As time goes on during execution, the
pages in memory will all contain portions
of the process near recent references.
Page faults low.
• Increased page size causes pages to
contain locations further from any recent
reference. Page faults rise.
Combined Paging and
Segmentation
Address Translation
Protection and sharing
• Segmentation lends itself to the
implementation of protection and sharing
policies.
• As each entry has a base address and
length, inadvertent memory access can be
controlled
• Sharing can be achieved by having a
segment referenced in the segment tables
of multiple processes
Operating system software :
Memory Management
Decisions
• Whether or not to use virtual memory
techniques
• The use of paging or segmentation or both
• The algorithms employed for various
aspects of memory management
Demand Paging
and Prepaging
• Demand paging
– only brings pages into main memory when a
reference is made to a location on the page
– Many page faults when process first started
• Prepaging
– brings in more pages than needed
– More efficient to bring in pages that reside
contiguously on the disk
– Don’t confuse with “swapping”
Replacement Policy:
Frame Locking
• Frame Locking
– If frame is locked, it may not be replaced
– Kernel of the operating system
– Key control structures
– I/O buffers
– Associate a lock bit with each frame
Basic Replacement
Algorithms
• There are certain basic algorithms that are
used for the selection of a page to replace,
they include
– Optimal
– Least recently used (LRU)
– First-in-first-out (FIFO)
– Clock
• Examples
Examples
• An example of the implementation of these
policies uses a page address stream formed
by executing a program:
– 2 3 2 1 5 2 4 5 3 2 5 2
• Which means that the first page
referenced is 2,
– the second page referenced is 3,
– and so on.
Optimal policy
• Selects for replacement that page for
which the time to the next reference is the
longest
• But Impossible to have perfect knowledge
of future events
Optimal Policy
Example

• The optimal policy produces three page
faults after the frame allocation has been
filled.
LRU Example

• The LRU policy does nearly as well as the
optimal policy.
– In this example, there are four page faults
First-in, first-out (FIFO)
• Treats page frames allocated to a process
as a circular buffer
• Pages are removed in round-robin style
– Simplest replacement policy to implement
• Page that has been in memory the longest
is replaced
– But, these pages may be needed again very
soon if they haven't truly fallen out of use
FIFO Example

• The FIFO policy results in six page faults.
– Note that LRU recognizes that pages 2 and 5
are referenced more frequently than other
pages, whereas FIFO does not.
Clock Policy
• Uses an additional bit called a "use bit"
• When a page is first loaded in memory or
referenced, the use bit is set to 1
• When it is time to replace a page, the OS
scans the frames, setting each use bit of 1
to 0 as it passes
• The first frame encountered with the use
bit already set to 0 is replaced.
Clock Policy Example

• Note that the clock policy is adept at
protecting the frames holding pages 2 and 5
from replacement.
Clock Policy
Clock Policy
Combined Examples
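The examples above can be reproduced with a short simulation of three of the policies on the stream 2 3 2 1 5 2 4 5 3 2 5 2 with three frames; as on the slides, the quoted fault counts exclude the three compulsory faults that first fill the frames.

```python
# Simulates FIFO, LRU and clock replacement with 3 frames on the
# slides' page address stream, counting total page faults.

STREAM = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
N = 3

def fifo(stream):
    frames, faults = [], 0
    for p in stream:
        if p not in frames:
            faults += 1
            if len(frames) == N:
                frames.pop(0)            # evict the longest resident
            frames.append(p)
    return faults

def lru(stream):
    frames, faults = [], 0
    for p in stream:
        if p in frames:
            frames.remove(p)             # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == N:
                frames.pop(0)            # evict least recently used
        frames.append(p)                 # most recent at the tail
    return faults

def clock(stream):
    frames, use = [None] * N, [0] * N
    ptr, faults = 0, 0
    for p in stream:
        if p in frames:
            use[frames.index(p)] = 1     # referenced: set use bit
            continue
        faults += 1
        while use[ptr]:                  # flip 1s to 0 while scanning
            use[ptr] = 0
            ptr = (ptr + 1) % N
        frames[ptr], use[ptr] = p, 1     # replace the use-bit-0 frame
        ptr = (ptr + 1) % N
    return faults

# After subtracting the 3 fill faults: FIFO 6, LRU 4, clock 5.
```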
Resident Set
Management
• The OS must decide how many pages to
bring into main memory
– The smaller the amount of memory allocated
to each process, the more processes that can
reside in memory.
– Small number of pages loaded increases
page faults.
– Beyond a certain size, further allocations of
pages will not affect the page fault rate.
Resident Set Size
• Fixed-allocation
– Gives a process a fixed number of pages
within which to execute
– When a page fault occurs, one of the pages of
that process must be replaced
• Variable-allocation
– Number of pages allocated to a process
varies over the lifetime of the process
Replacement Scope
• The scope of a replacement strategy can
be categorized as global or local.
– Both types are activated by a page fault when
there are no free page frames.
– A local replacement policy chooses only
among the resident pages of the process that
generated the page fault
– A global replacement policy considers all
unlocked pages in main memory
Fixed Allocation,
Local Scope
• Decide ahead of time the amount of
allocation to give a process
• If allocation is too small, there will be a
high page fault rate
• If allocation is too large there will be too
few programs in main memory
– Increased processor idle time or
– Increased swapping.
Variable Allocation, Global
Scope
• Easiest to implement
– Adopted by many operating systems
• Operating system keeps list of free frames
• Free frame is added to resident set of
process when a page fault occurs
• If no free frame, replaces one from
another process
– Therein lies the difficulty … which to replace.
Variable Allocation,
Local Scope
• When new process added, allocate
number of page frames based on
application type, program request, or other
criteria
• When page fault occurs, select page from
among the resident set of the process that
suffers the fault
• Reevaluate allocation from time to time
Resident Set
Management Summary
Cleaning Policy
• A cleaning policy is concerned with
determining when a modified page should
be written out to secondary memory.
• Demand cleaning
– A page is written out only when it has been
selected for replacement
• Precleaning
– Pages are written out in batches
Cleaning Policy
• Best approach uses page buffering
• Replaced pages are placed in two lists
– Modified and unmodified
• Pages in the modified list are periodically
written out in batches
• Pages in the unmodified list are either
reclaimed if referenced again or lost when
their frames are assigned to another page
Load Control
• Determines the number of processes that
will be resident in main memory
– The multiprogramming level
• Too few processes, many occasions when
all processes will be blocked and much
time will be spent in swapping
• Too many processes will lead to thrashing
Multiprogramming
Process Suspension
• If the degree of multiprogramming is to be
reduced, one or more of the currently
resident processes must be suspended
(swapped out).
• Six possibilities exist…
Suspension policies
• Lowest priority process
• Faulting process
– This process does not have its working set in
main memory so it will be blocked anyway
• Last process activated
– This process is least likely to have its working
set resident
Suspension policies cont.
• Process with smallest resident set
– This process requires the least future effort to
reload
• Largest process
– Obtains the most free frames
• Process with the largest remaining
execution window
Linux
Memory Management
• Shares many characteristics with Unix
– But is quite complex
• Two main aspects
– Process virtual memory, and
– Kernel memory allocation.
Linux
Memory Management
• Page directory
• Page middle directory
• Page table
Linux Virtual Memory
• Three level page table structure
– Each table is the size of one page
• Page directory
– Each process has one page directory
– 1 page in size, must be in main memory
• Page middle directory:
– May be multiple pages, each entry points to
one page in the page table
Linux Memory cont
• Page table
– May also span multiple pages.
– Each page table entry refers to one virtual
page of the process.
Address Translation
Page Replacement
• Based on the clock algorithm
• The "use bit" is replaced with an 8-bit age
variable
– Incremented with each page access
• Periodically decrements the age bits
– Any page with an age of 0 is “old” and is a
candidate for replacement
• A form of Least Frequently Used policy
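A much-simplified sketch of the age mechanism described above (real kernels differ in details such as decrement amounts and list handling; the names here are mine):

```python
# Simplified sketch of an 8-bit age variable: accesses age a page up
# (saturating at 255), a periodic sweep ages all pages down, and
# pages that reach age 0 become replacement candidates.

age = {}   # page -> age (0..255)

def on_access(page):
    age[page] = min(age.get(page, 0) + 1, 255)

def periodic_sweep():
    for page in age:
        age[page] = max(age[page] - 1, 0)

def candidates():
    return sorted(p for p, a in age.items() if a == 0)
```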
Windows
Memory Management
• The Windows virtual memory manager
controls how memory is allocated and how
paging is performed.
• Designed to operate over a variety of
platforms
– uses page sizes ranging from 4 Kbytes to 64
Kbytes.
Windows Virtual
Address Map
• On 32 bit platforms each user process
sees a separate 32 bit address space
– Allowing 4G per process
• Some reserved for the OS,
– Typically each user process has 2G of
available virtual address space
– With all processes sharing the same 2G
system space
32 bit Windows
Address Space
Windows Paging
• On creation, a process can make use of
the entire user space of almost 2 Gbytes.
• This space is divided into fixed-size pages
managed in contiguous regions allocated
on 64Kbyte boundaries
• Regions may be in one of three states
– Available
– Reserved
– Committed
