
Memory Management

Operating Systems
Lecture 4
Today
• We’ll try to finish most of memory
management today
• EXAM two weeks from today
– Material covered only up through today
• Lab before exam will be a study session
for the exam
Early memory management
schemes
• Originally, the whole computer was devoted to a single user:

[Diagram: the user's job occupies all of memory, from address 0 to 65535]
Limitations of single-user
contiguous scheme
• Only one person can use the machine at a time--lots of computer time goes to waste (why?)
• The largest runnable job is limited by the size of machine memory
Next: fixed partitions
• Memory was divided into fixed chunks, one per job:

[Diagram: memory from 0 to 65535 divided into fixed partitions holding Job 1, Job 2, and Job 3]
Limitations of fixed partitions
• The operator had to correctly guess the size of programs in advance
• Programs were limited to the partitions they were given
• Memory fragmentation resulted
• This kind of waste inside a partition is called internal memory fragmentation
Dynamic Partitions

[Diagram: snapshots over time of jobs 1 through 7 being loaded and released, leaving free gaps of varying sizes between the dynamic partitions]
Internal versus external memory
fragmentation:
[Diagram: Job 8 loaded into the partition previously allocated to Job 1; the unused space left inside the partition is internal fragmentation, while free gaps between partitions are external fragmentation]
Dynamic Partitions
• Contiguous memory is still required for
processes
• How do we decide size of the partitions?
• Once the machine is running, how do old jobs get replaced by new ones?
Dynamic Partitions: First Fit
• In this scheme, we search forward in the
“free list” for a partition large enough to
accommodate the next job
• Fast, but the gaps left can be large
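
A rough sketch of first fit (the (start, length) free-list representation here is illustrative, not from any particular OS):

```python
# First-fit sketch: scan the free list in address order and take the
# first block large enough. Representation is illustrative.

def first_fit(free_list, size):
    """Return (start, index) of the first free block that can hold
    `size`, or None if no block is large enough."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            return start, i
    return None

# Free list as (start_address, length) pairs, kept in address order.
free_list = [(0, 100), (300, 50), (500, 400)]
print(first_fit(free_list, 120))   # (500, 2): first block holding >= 120
```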
Dynamic Partitions: Best Fit
• In this scheme, we try to find the smallest
partition large enough to hold the next job
• This tends to minimize the size of the gaps
• But it also requires that we keep a list of free spaces
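
A matching sketch of best fit over the same illustrative free list; unlike first fit, it must examine every free block to find the tightest one:

```python
# Best-fit sketch: scan the whole free list and remember the smallest
# block that still fits. Representation is illustrative.

def best_fit(free_list, size):
    """Return (start, index) of the smallest free block that can hold
    `size`, or None if none fits."""
    best = None
    for i, (start, length) in enumerate(free_list):
        if length >= size and (best is None or length < free_list[best][1]):
            best = i
    return None if best is None else (free_list[best][0], best)

free_list = [(0, 100), (300, 50), (500, 400)]
print(best_fit(free_list, 40))     # (300, 1): tightest block holding >= 40
```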
Deallocating memory
• If the block we are deallocating is adjacent
to one or two free blocks, then it needs to
be merged with them.
• So either we add a pointer to the freed block to the free list, or we change the size of an existing free block, or both
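
A minimal sketch of deallocation with merging, again over the illustrative (start, length) free list; a freed block that touches an existing free block is coalesced with it:

```python
# Free a block and coalesce it with adjacent free blocks. The sorted
# (start, length) list is an illustrative representation.

def free_block(free_list, start, length):
    """Insert (start, length) into the free list, merging any free
    blocks whose address ranges touch."""
    free_list.append((start, length))
    free_list.sort()
    merged = []
    for s, ln in free_list:
        if merged and merged[-1][0] + merged[-1][1] == s:
            prev_s, prev_ln = merged.pop()         # previous block ends here:
            merged.append((prev_s, prev_ln + ln))  # merge the two
        else:
            merged.append((s, ln))
    free_list[:] = merged

fl = [(0, 100), (200, 50)]
free_block(fl, 100, 100)    # exactly fills the gap between the two blocks
print(fl)                   # [(0, 250)]: one merged free block
```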
Relocatable Dynamic Partitions
• We can see that in some cases, a job could “fit” into the combined free spaces within or between the partitions of the earlier schemes
• So how do we take advantage of that space?
• One way is to move programs while they are in
the machine--compacting them down into the
lower end of memory above the operating system
Several names for this
• Garbage collection
• Defragmentation
• Compaction

• All share a problem: relative addressing!
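
A toy sketch of compaction (job layout and names are made up): every job slides down toward the operating system, so each one ends up with a new base address--which is exactly why addressing inside the job must be relative:

```python
# Compaction sketch: pack all jobs contiguously just above the OS,
# leaving one big free gap at the top. Jobs are (name, start, length).

def compact(jobs, os_end):
    """Return the jobs relocated to consecutive addresses from os_end."""
    next_addr = os_end
    relocated = []
    for name, start, length in sorted(jobs, key=lambda j: j[1]):
        relocated.append((name, next_addr, length))  # job's new base
        next_addr += length
    return relocated

jobs = [("J1", 1000, 300), ("J2", 2000, 500), ("J3", 4000, 100)]
print(compact(jobs, 512))
# [('J1', 512, 300), ('J2', 812, 500), ('J3', 1312, 100)]
```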


Special registers
• Early machine architectures went through
a phase of “more complexity is better”
• Few registers, but large instruction sets
• RISC computers have changed all of that
(for architectural reasons we won’t go into
in detail)
But...
• In order to manage dynamic relocatable partitions, two registers were assigned to each partition: a base register and a bounds (limit) register
• Note: jobs were not broken into parts, but
were relocated whole hog
• Note that the whole job is stored in
memory--still no heavy use of external
storage devices
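
A sketch of how the register pair works (names and numbers are illustrative): the base register holds the partition's start, the bounds register its size, and the hardware checks and translates every relative address the job issues:

```python
# Base/bounds translation sketch. After compaction moves a job, only
# `base` changes; the job's relative addresses are untouched.

def translate(rel_addr, base, limit):
    """Map a job-relative address to a physical one, or trap."""
    if rel_addr >= limit:
        raise MemoryError("address beyond partition bounds")  # protection
    return base + rel_addr

print(translate(100, base=5000, limit=1024))   # 5100
```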
Summary of early schemes
• These were adequate for batch jobs
• But speed of components continued to
advance
• Storage devices appeared
• And then the biggie: remote terminals
• Note that processor speeds double about
every 1.5 years--much faster than memory!
To accommodate changes
• We really need to be able to extend memory by
hiding unused parts of programs on storage
devices--called virtual memory
• We can do this if we can break a program into
pieces--called pages--that can be independently
loaded and removed from memory without
affecting the running program
Paged Memory
• Allows jobs to reside in noncontiguous
memory
• More jobs can be squeezed into memory
• But—some internal fragmentation,
particularly if the number of jobs is large
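
A sketch of the address arithmetic paging rests on (page size and table contents are made up for illustration): the virtual address splits into a page number and an offset, and the page table maps page numbers to frame numbers:

```python
# Paged address translation sketch. PAGE_SIZE and the table are
# illustrative values, not from any particular machine.

PAGE_SIZE = 4096                       # assumed frame/page size

page_table = {0: 7, 1: 3, 2: 12}       # page number -> frame number

def virt_to_phys(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]           # KeyError here = page not resident
    return frame * PAGE_SIZE + offset

print(virt_to_phys(5000))   # page 1, offset 904 -> 3*4096 + 904 = 13192
```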
From partitions to page frames
• Memory partitions varied in size before
• Now we want to fix the size to match the
chunk size of the programs coming in (plus
a tiny bit of bookkeeping space)
• We call these chunks of memory page
frames
Is there any fragmentation?
• We can see that if page frames are chosen
right, we shouldn’t have external
fragmentation
• Seldom will the size of code exactly fit the
frames, so there will be a little bit of
internal fragmentation
• Tradeoff between internal fragmentation
and processor time spent managing frames
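
A worked example with made-up numbers: a 10,000-byte job in 4,096-byte frames needs three frames (12,288 bytes), so 2,288 bytes of the last frame sit unused--internal fragmentation, on average about half a frame per job.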
Other problems
• Earliest schemes didn’t allow virtual
memory
• Tables for managing memory became a significant part of the operating system--it's growing!
Demand Paging
• Demand paging broke jobs into pieces and
became popular because it allowed only
parts of jobs to be active in memory
• New problems: page faults and, when paging overwhelms useful work, thrashing
Page Replacement Algorithms
• Optimal page replacement is simply not possible--it would require knowing future page references
• Keep Referenced (R) and Modified (M) bits to allow us to track past usage instead
– Page is referenced by any read or write in it
– Page is modified by any change (write) made
to it
Page Replacement Algorithms,
Continued
• FIFO = First in, first out
• LRU = Least recently used
• LFU = Least frequently used
• Both of the latter rely on a record of page requests made to the operating system
• A failure to find a page in memory = a page interrupt (page fault)
• We might measure quality by:
failure rate = page interrupts / page requests
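
A sketch that counts faults for FIFO and LRU on a made-up reference string, from which the failure rate above follows directly:

```python
# Count page faults for FIFO and LRU over a reference string with
# `nframes` physical frames. String and frame count are illustrative.
from collections import OrderedDict

def fifo_faults(refs, nframes):
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)              # evict the oldest arrival
            frames.append(page)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False) # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 1, 4, 2, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 5 6
# failure rate = faults / len(refs)
```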
Page Replacement Algorithms,
Continued
• Clock page replacement
– Hand of the clock points to the oldest page
– If a page fault occurs, check R bits in
“clockwise” order
• A variant called the “two-handed clock” is
used in some UNIX systems
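
A sketch of the basic one-handed clock (the class layout is illustrative): the hand sweeps the frames, clearing R bits as it passes, and evicts the first page it finds with R = 0:

```python
# One-handed clock replacement sketch. Frames form a circle; pages with
# R = 1 get a second chance before eviction.

class Clock:
    def __init__(self, nframes):
        self.pages = [None] * nframes    # page loaded in each frame
        self.rbits = [0] * nframes       # referenced (R) bits
        self.hand = 0

    def access(self, page):
        """Reference a page; return True if it caused a fault."""
        if page in self.pages:
            self.rbits[self.pages.index(page)] = 1   # hardware sets R
            return False
        while self.rbits[self.hand] == 1:            # find an R = 0 frame
            self.rbits[self.hand] = 0                # second chance given
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page                 # replace the victim
        self.rbits[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return True

clk = Clock(3)
print(sum(clk.access(p) for p in [1, 2, 3, 1, 4, 2, 5]))   # 5 faults
```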
More memory is not a FIFO fix
• Adding page frames can actually increase the fault count--called Belady’s anomaly
• The page request order is an important factor, not just the size of memory
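
The classic demonstration: running FIFO on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 gives 9 page faults with three frames but 10 with four--adding memory made things worse. (The fifo_faults sketch above reproduces both counts.)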
LRU
• Doesn’t suffer from Belady’s anomaly
• Presumes locality of reference
• But while it works well, it is a little more
complex to implement in software
– Consequently, aging and various clock
algorithms are the most common in practice
– Aging can yield a good approximation
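
A sketch of aging with 8-bit counters (page names are made up): on every clock tick each counter shifts right one bit and the page's R bit is ORed into the top bit, so the page with the smallest counter is (approximately) the least recently used:

```python
# Aging sketch: 8-bit counters approximate LRU order.

counters = {p: 0 for p in ("A", "B", "C")}

def tick(referenced):
    """One clock tick; `referenced` is the set of pages with R = 1."""
    for page in counters:
        counters[page] >>= 1                   # age every counter
        if page in referenced:
            counters[page] |= 0b10000000       # recent use -> high bit
    # (Hardware would clear the R bits here.)

tick({"A", "B"})
tick({"A"})
tick({"C"})
print(min(counters, key=counters.get))   # "B": its last use is oldest
```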
Segmented Memory Allocation
• Instead of equal divisions, try to break
code into its natural modules
• The compiler is now asked to help the operating system
• No page frames--different sizes required
(meaning we get external fragmentation
again)
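
A sketch of segmented translation (table contents are made up): each segment carries its own base and length, so every offset is checked against a variable limit rather than a fixed frame size:

```python
# Segment-table translation sketch: logical address = (segment, offset).

segment_table = {0: (1400, 1000),   # segment -> (base, length)
                 1: (6300, 400),
                 2: (4300, 1100)}

def seg_translate(seg, offset):
    base, length = segment_table[seg]
    if offset >= length:
        raise MemoryError("offset beyond segment length")  # protection
    return base + offset

print(seg_translate(2, 53))   # 4300 + 53 = 4353
```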
Segmented/Demand Paging
• Subdivide the natural program segments
into equal sized parts to load into page
frames
• Eliminates external fragmentation
• Allows for a large virtual memory, so it is often used in more modern OSes
Tradeoffs
• Note that there is a tradeoff between internal fragmentation and page faults in paging systems
• Note also that we probably want slightly
smaller page frames in a Segmented-
Demand Paging framework
