
Memory Management

Chapter 7
By: Hatem Moharram
Memory Management
• There is never enough main memory to hold all of the programs and data structures needed by active processes and by the operating system.
• A central task of the operating system is to manage memory, which involves bringing in and swapping out blocks of data from secondary memory.
• Memory I/O is a slow operation.
• To keep the processor or processors busy and thus maintain efficiency, the OS must decrease the time of swapping in and swapping out to minimize the effect of memory I/O on performance.
In a uniprogramming system, main memory is divided into two parts: one part for the operating system (resident monitor, kernel) and one part for the program currently being executed.

In a multiprogramming system, the “user” part of memory must be further subdivided to accommodate multiple processes. The task of subdivision is carried out dynamically by the operating system and is known as memory management.

If only a few processes are in memory, then for much of the time all of the processes will be waiting for I/O and the processor will be idle. Thus memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.
7.1 MEMORY MANAGEMENT REQUIREMENTS
Five requirements that memory management is intended to satisfy:

1. Relocation
2. Protection
3. Sharing
4. Logical organization
5. Physical organization
7.1 MEMORY MANAGEMENT REQUIREMENTS

1- Relocation
The programmer does not know where the program will be placed in memory when it is executed.
The OS should be able to swap active processes in and out of main memory to maximize processor utilization by providing a large pool of ready processes to execute.
While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated).
This raises some technical concerns related to addressing: memory references in the code must be translated into actual physical memory addresses.
The processor must deal with memory references within the program:
• Branch instructions contain an address to reference the instruction to be executed next.
• Data reference instructions contain the address of the byte or word of data referenced.
The processor hardware and the OS must be able to translate the memory references found in the program into actual physical addresses.
7.1 MEMORY MANAGEMENT REQUIREMENTS
2- Protection
Each process should be protected against unwanted
interference by other processes, whether accidental or
intentional.

• Satisfaction of the relocation requirement increases the difficulty of satisfying the protection requirement.
• It is impossible to check absolute addresses at compile time to assure protection, since the program could be relocated.
• All memory references generated by a process must be checked at run time to ensure that they refer only to the memory space allocated to that process.
7.1 MEMORY MANAGEMENT REQUIREMENTS

A user process cannot access the following:
1- Any portion of the operating system, neither program nor data.
2- An instruction in another process (branch to).
3- The data area of another process (without special arrangement).
The processor (hardware) must be able to abort such instructions at the point of execution.
7.1 MEMORY MANAGEMENT REQUIREMENTS
3- Sharing
• Allow several processes to access the same portion of memory.
• It is better to allow each process access to the same copy of the program rather than have its own separate copy.
• The memory management system must allow controlled access to shared areas of memory without compromising essential protection.
7.1 MEMORY MANAGEMENT REQUIREMENTS
4- Logical Organization
• Main memory and secondary memory are organized as a linear address space, consisting of a sequence of bytes or words.
• Programs, however, are organized into modules.
• If the OS and computer hardware can deal with programs and data in the form of modules, then a number of advantages can be realized:
– Modules can be written and compiled independently.
– Different degrees of protection can be given to modules (read-only, execute-only).
– Modules can be shared.
The tool that satisfies these requirements is segmentation.
7.1 MEMORY MANAGEMENT REQUIREMENTS
5- Physical Organization
• Computer memory is organized into at least two levels: main memory and secondary memory.
1- Main memory: faster, more expensive, and volatile.
2- Secondary memory: slower, cheaper, larger capacity, and nonvolatile.
Thus secondary memory can be provided for long-term storage of programs and data, while main memory holds programs and data currently in use.
The organization of the flow of information between main and secondary memory is a major system concern.

The responsibility for this flow cannot be assigned to the individual programmer, for two reasons:

1- The main memory available for a program may be insufficient. In that case, the programmer must use overlaying, in which various modules can be assigned the same region of memory, with a main program responsible for switching the modules in and out as needed (see the sketch after this list). Even with the aid of compiler tools, overlay programming wastes programmer time.
2. In a multiprogramming environment, the programmer
does not know at the time of coding how much space will
be available or where that space will be.
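As a rough illustration of the overlay idea (not taken from the slides), the following C sketch lets two program phases share one region of memory. The union, the sizes, and the pass1/pass2 names are assumptions chosen only to show "modules assigned the same region of memory"; real overlay systems load code and data in from disk under control of a main program or linker tool.

/* Conceptual overlay sketch (illustrative only): two phases of a program
 * need large work areas, but never at the same time, so both are assigned
 * the same region of memory. */
#include <stdio.h>
#include <string.h>

static union {
    double pass1_table[4096];   /* needed only during pass 1 */
    char   pass2_buffer[32768]; /* needed only during pass 2 */
} overlay_region;               /* one region, reused by both modules */

static void pass1(void) {
    overlay_region.pass1_table[0] = 3.14;
    printf("pass 1 uses %zu bytes of the overlay region\n",
           sizeof overlay_region.pass1_table);
}

static void pass2(void) {
    /* pass 2 reuses the very same memory; pass 1's data is gone */
    strcpy(overlay_region.pass2_buffer, "pass 2 output");
    printf("pass 2 uses %zu bytes of the overlay region\n",
           sizeof overlay_region.pass2_buffer);
}

int main(void) {
    pass1();
    pass2();
    return 0;
}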
7.2 MEMORY PARTITIONING
1- Fixed Partitioning
The OS occupies some fixed portion of main memory, and the rest of main memory is available for use by multiple processes.

Two alternatives for fixed partitioning exist:
A] equal-size partitions
B] unequal-size partitions
A] Equal-size partitions
• Any process whose size is less than or equal to the partition size can be loaded into an available partition.
• If all partitions are full, the OS can swap a process out of a partition.
• There are two difficulties with the use of equal-size fixed partitions:
– A program may not fit in a partition. The programmer must design the program with overlays.
– Main memory utilization is extremely inefficient. Any program, no matter how small, occupies an entire partition; for example, a 2-Mbyte program loaded into an 8-Mbyte partition leaves 6 Mbytes of the partition unused. This is called internal fragmentation.
B] Unequal-size partitions
All partitions are of unequal size.
Each process can be assigned to the smallest partition within which it will fit.
There is still a possibility of internal fragmentation.


Placement Algorithm with Partitions
• Equal-size partitions
Because all partitions are of equal size, it does not matter which partition is used.
• Unequal-size partitions
With a queue for each partition, processes are assigned so as to minimize wasted memory within a partition. This is optimum from the point of view of an individual partition, but it is not optimum from the point of view of the system as a whole.
Thus, a preferable approach would be to employ a single queue for all processes: the smallest available partition that will hold the process is selected.
If all partitions are occupied, then a swapping decision must be made. The swapping-out decision can be made according to:
- the smallest partition that will hold the incoming process
- priority
- blocked processes versus ready processes
There are disadvantages to fixed partitioning:
• The number of partitions specified at system generation time limits the number of active (not suspended) processes in the system.
• Because partition sizes are preset at system generation time, small jobs will not utilize partition space efficiently.
2- Dynamic Partitioning
The partitions are of variable length and number.
When a process is brought into main memory, it is allocated exactly as much memory as it requires and no more.
2- Dynamic Partitioning
• This method leads to a situation in which there are many small holes in memory. This phenomenon is referred to as external fragmentation.
• One technique for overcoming external fragmentation is compaction (a minimal sketch follows this list).
• From time to time, the OS shifts the processes so that they are contiguous and so that all of the free memory is together in one block.
• The difficulty with compaction is that it is a time-consuming procedure and wasteful of processor time.
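A minimal compaction sketch, assuming a simplified memory map held as an array of regions (owner, size); the struct, names, and sample values are illustrative, not part of the slides. It slides all allocated regions toward the low end of memory and gathers the free space into one block at the top.

/* Compaction over a simplified memory map.  Each entry describes a
 * contiguous region: which process owns it (-1 = free) and its size in
 * Kbytes.  Allocated regions are shifted toward address 0 and all free
 * space ends up as one block at the end of memory. */
#include <stdio.h>

struct region { int owner; int size_kb; };

/* Shift allocated regions to the front; return new count of regions. */
static int compact(struct region map[], int n) {
    int used = 0, free_kb = 0;
    for (int i = 0; i < n; i++) {
        if (map[i].owner >= 0)
            map[used++] = map[i];      /* keep allocated regions, in order */
        else
            free_kb += map[i].size_kb; /* gather free space */
    }
    if (free_kb > 0) {                 /* one free block at the top of memory */
        map[used].owner = -1;
        map[used].size_kb = free_kb;
        used++;
    }
    return used;
}

int main(void) {
    struct region map[] = {
        {1, 200}, {-1, 100}, {2, 300}, {-1, 60}, {3, 150}, {-1, 40}
    };
    int n = compact(map, 6);
    for (int i = 0; i < n; i++)
        printf("region %d: owner=%d size=%dK\n", i, map[i].owner, map[i].size_kb);
    return 0;
}

Note that a real OS must also update the relocated processes' base registers after moving them, which is part of why compaction is so time consuming.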
Dynamic Partitioning Placement Algorithm
• The operating system must decide which free block to allocate to a process.
• Three placement algorithms that might be considered are (a combined sketch follows their descriptions):
1- best-fit
2- first-fit
3- next-fit
Dynamic Partitioning Placement Algorithm
1- Best-fit algorithm
Chooses the block that is closest in size to the request.
– Worst performer overall.
– Since the smallest adequate block is found for the process, the smallest amount of fragmentation is left.
– Memory compaction must therefore be done more often.


Dynamic Partitioning Placement Algorithm

2- First-fit algorithm
Begins to scan memory from the beginning and chooses the first available block that is large enough.
– The simplest.
– Usually the best and fastest as well.
– May leave many processes loaded in the front end of memory that must be searched over when trying to find a free block.
Dynamic Partitioning Placement Algorithm

3- Next-fit
Begins to scan memory from the location of the last placement, and chooses the next available block that is large enough.
– More often allocates a block at the end of memory, where the largest block is found.
– The largest block of memory is thus broken up into smaller blocks.
– Compaction is required to obtain a large block at the end of memory.
(Figure: example memory configuration before and after allocation of a 16-Mbyte block.)
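The following C sketch contrasts the three placement algorithms on a list of free-block sizes kept in address order; the array representation, the sample values, the 16000-Kbyte request, and the function names are assumptions for illustration only.

/* Placement algorithms over a simple free list.  free_kb[] holds the sizes
 * of the free blocks in address order; each function returns the index of
 * the chosen block, or -1 if none fits. */
#include <stdio.h>

static int first_fit(const int free_kb[], int n, int req) {
    for (int i = 0; i < n; i++)
        if (free_kb[i] >= req) return i;          /* first block big enough */
    return -1;
}

static int best_fit(const int free_kb[], int n, int req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (free_kb[i] >= req && (best < 0 || free_kb[i] < free_kb[best]))
            best = i;                              /* closest in size to req */
    return best;
}

static int next_fit(const int free_kb[], int n, int req, int *last) {
    for (int k = 0; k < n; k++) {                  /* resume after last placement */
        int i = (*last + 1 + k) % n;
        if (free_kb[i] >= req) { *last = i; return i; }
    }
    return -1;
}

int main(void) {
    int free_kb[] = {8000, 12000, 22000, 18000, 6000, 14000, 36000};
    int n = 7, last = 3, req = 16000;              /* a 16-Mbyte request */
    printf("first-fit -> block %d\n", first_fit(free_kb, n, req));
    printf("best-fit  -> block %d\n", best_fit(free_kb, n, req));
    printf("next-fit  -> block %d\n", next_fit(free_kb, n, req, &last));
    return 0;
}

Next-fit keeps a roving index (the last parameter) so that successive requests walk around the list instead of always re-scanning from the front.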
Replacement Algorithm
The OS swaps one of the processes out of main memory to make room for a new process or for a process in a Ready-Suspend state.
The OS must choose which process to replace.
This will be covered later in some detail with respect to various virtual memory schemes.
Relocation
• When a process image is first created, it is loaded into
some partition in main memory. Later, the process may
be swapped out; when it is subsequently swapped back
in, it may be assigned to a different partition than the
last time.
• The locations (of instructions and data) referenced by a process are not fixed. They will change each time a process is swapped in or shifted.

Several types of addresses exist:
– logical address
– relative address
– physical address
Addresses
• Logical
– reference to a memory location independent of the
current assignment of data to memory
– translation must be made to the physical address
before the memory access can be achieved.

• Relative
– a particular example of a logical address
– address expressed as a location relative to some
known point

• Physical
– the absolute address or actual location in main
memory
A hardware mechanism is needed for translating relative addresses to physical main memory addresses at the time of execution of the instruction that contains the reference.
Registers Used during Execution
• Base register
– starting address for the process

• Bounds register
– ending location of the process

• These values are set when the process is loaded and when the process is swapped in.

Registers Used during Execution
• The value of the base register is added to a relative address to produce an absolute address.
• The resulting address is compared with the value in the bounds register.
• If the address is not within bounds, an interrupt is generated to the operating system.
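A minimal sketch of this translation and check, assuming example base and bounds values; the exit() call merely stands in for the interrupt that the hardware would raise to the OS.

/* Relative-to-absolute address translation with base and bounds registers. */
#include <stdio.h>
#include <stdlib.h>

static unsigned long base_reg   = 0x40000;  /* starting address of the process */
static unsigned long bounds_reg = 0x4FFFF;  /* ending location of the process  */

/* Add the base to a relative address, then check against the bounds. */
static unsigned long translate(unsigned long relative) {
    unsigned long absolute = base_reg + relative;
    if (absolute > bounds_reg) {
        fprintf(stderr, "interrupt: address 0x%lx out of bounds\n", absolute);
        exit(EXIT_FAILURE);              /* stand-in for an OS trap */
    }
    return absolute;
}

int main(void) {
    printf("relative 0x100 -> absolute 0x%lx\n", translate(0x100));
    translate(0x20000);                  /* exceeds bounds: simulated interrupt */
    return 0;
}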
7.3 PAGING
• Partition main memory into small equal-size chunks (frames), and divide each process into chunks of the same size (pages).
• The wasted space is due to internal fragmentation, consisting of only a fraction of the last page of a process.
7.3 PAGING
• The OS maintains a page table for each process that contains the frame location for each page in the process.
A page table contains one entry for each page of the process. Each page table entry contains the number of the frame in main memory, if any, that holds the corresponding page.
• Within the program, each logical address consists of a page number and an offset within the page. The processor uses the page table to produce a physical address (frame number, offset).
• The OS maintains a single free-frame list of all frames in main memory that are currently unoccupied and available for pages.
Page Tables for Example
The differences between fixed partitioning and paging:
• With paging, the partitions are rather small.
• With paging, a program may occupy more than one partition.
• These partitions need not be contiguous.
The following steps are used for address translation (a sketch follows these steps):

1. Extract the page number as the leftmost n bits of the logical address.
2. Use the page number as an index into the process page table to find the frame number, k.
3. The starting physical address of the frame is k × 2^m (where m is the number of offset bits), and the physical address of the referenced byte is that number plus the offset.
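A minimal sketch of these steps, assuming a 16-bit logical address with n = 6 page-number bits and m = 10 offset bits (so pages are 2^10 = 1024 bytes); the page-table contents are made up for illustration.

/* Paging address translation: page number -> frame number, plus offset. */
#include <stdio.h>

#define OFFSET_BITS 10                       /* m */
#define PAGE_SIZE   (1u << OFFSET_BITS)      /* 2^m */

static unsigned page_table[] = {5, 9, 2, 7}; /* page -> frame number */

static unsigned translate(unsigned logical) {
    unsigned page   = logical >> OFFSET_BITS;        /* leftmost n bits  */
    unsigned offset = logical & (PAGE_SIZE - 1);     /* rightmost m bits */
    unsigned frame  = page_table[page];              /* look up frame k  */
    return frame * PAGE_SIZE + offset;               /* k * 2^m + offset */
}

int main(void) {
    unsigned logical = (1u << OFFSET_BITS) | 0x1E5;  /* page 1, offset 0x1E5 */
    printf("logical 0x%04X -> physical 0x%04X\n", logical, translate(logical));
    return 0;
}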
7.4 SEGMENTATION
• The program and its associated data are divided into a number of segments.
• The segments of different programs do not all have to be the same length.
• There is a maximum segment length.
• Addressing consists of two parts: a segment number and an offset.
• Since segments are not of equal size, segmentation is similar to dynamic partitioning.
The difference, compared to dynamic partitioning, is that
with segmentation a program may occupy more than one
partition, and these partitions need not be contiguous.

Segmentation eliminates internal fragmentation but, like


dynamic partitioning, it suffers from external
fragmentation.

because a process is broken up into a number of smaller


pieces, the external fragmentation should be less.

Like paging, a simple segmentation scheme would make


use of a segment table for each process and a list of free
blocks of main memory.
Each segment table entry would have to give the starting
address in main memory of the corresponding segment.

The entry also provide the length of the segment, to assure


that invalid addresses are not used.
Address translation (a sketch follows these steps):
• Extract the segment number as the leftmost n bits of the logical address.
• Use the segment number as an index into the process segment table to find the starting physical address of the segment.
• Compare the offset, expressed in the rightmost m bits, to the length of the segment. If the offset is greater than or equal to the length, the address is invalid.
• The desired physical address is the sum of the starting physical address of the segment plus the offset.
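A minimal sketch of these steps, assuming 12 offset bits and a two-entry segment table with made-up base and length values; the invalid-offset case stands in for the interrupt the hardware would generate.

/* Segmentation address translation: segment number -> (base, length);
 * the offset is checked against the length before forming the address. */
#include <stdio.h>
#include <stdlib.h>

#define OFFSET_BITS 12                 /* m: rightmost bits hold the offset */

struct seg_entry { unsigned base; unsigned length; };

static struct seg_entry seg_table[] = {
    {0x0000, 0x07A0},                  /* segment 0 */
    {0x4000, 0x0C00},                  /* segment 1 */
};

static unsigned translate(unsigned logical) {
    unsigned seg    = logical >> OFFSET_BITS;          /* leftmost n bits   */
    unsigned offset = logical & ((1u << OFFSET_BITS) - 1);
    if (offset >= seg_table[seg].length) {             /* invalid address   */
        fprintf(stderr, "interrupt: offset 0x%X beyond segment length\n", offset);
        exit(EXIT_FAILURE);
    }
    return seg_table[seg].base + offset;               /* base + offset     */
}

int main(void) {
    unsigned logical = (1u << OFFSET_BITS) | 0x2F0;    /* segment 1, offset 0x2F0 */
    printf("logical 0x%04X -> physical 0x%04X\n", logical, translate(logical));
    return 0;
}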
