Lecture 6 After Mid

The document discusses memory management concepts, including the hierarchy of memory, the role of the CPU in executing programs, and various techniques for managing memory allocation. It covers address binding, logical versus physical address spaces, and methods such as dynamic loading, contiguous and non-contiguous memory allocation, and swapping. The goal of memory management is to ensure efficient and secure use of memory while optimizing CPU utilization.


Memory Management

Basic Concept
• The CPU executes programs.
• It is directly connected to registers and cache memory.
• It is also connected to RAM (main memory).
• There is also secondary memory (disk).
• Secondary memory is not connected directly to the CPU because it is slower.
• To execute a program, it must first be brought from secondary memory into RAM.
Basic Concepts
• Selection of memory-management method for a specific
system depends on many factors especially on the hardware
design of the system. Recent designs have integrated the
hardware and operating system.
• The CPU fetches instructions from memory according to the
value of its program counter and other memory management
registers such as segment registers in Intel CPUs.
• A typical instruction-execution cycle first fetches an instruction from memory; the instruction is then decoded and executed. Operands may have to be fetched from memory, and after the instruction has been executed, the results are stored back in memory.
• Increasing the size of RAM, cache, or registers increases the cost of the system.
Memory Hierarchy
• The memory hierarchy includes:
• Very small, extremely fast, extremely expensive, volatile CPU registers
• Small, very fast, expensive, volatile cache
• Hundreds of megabytes of medium-speed, medium-price, volatile main memory
• Hundreds of gigabytes of slow, cheap, non-volatile secondary storage
• Thousands of terabytes of very slow, almost free, non-volatile Internet storage (Web pages, FTP repositories, etc.)
Memory Management
• The purpose of memory management is to ensure
• fair,
• secure,
• orderly, and
• efficient use of memory.
• The task of memory management includes keeping track of used and free memory space, as well as deciding when, where, and how much memory to allocate and de-allocate.
• It is also responsible for swapping processes in and out of main memory.
• Degree of multiprogramming: when processes are brought from secondary storage into RAM, bringing in more processes enhances CPU utilization.
• At any moment a process uses either the CPU or an I/O unit; if only one process is in memory and it needs to perform I/O, the CPU sits idle and efficiency drops.
Source to Execution

• Compile/Assemble

• Link

• Load

• Execute
Address Binding
• Usually a program resides on a disk as a binary executable or script file. The program must be brought into memory to be executed. The collection of processes waiting on the disk to be brought into memory for execution forms the input queue.
• The normal procedure is to select one of the processes in the input queue and load it into memory. As the process executes, it accesses instructions and data from memory. Eventually the process terminates, and its memory space becomes available for reuse.
Address Binding
• Compile time: if you know at compile time where the process will reside in memory, absolute addresses can be assigned to instructions and data by the compiler.
• Load time: if it is not known at compile time where the process will reside in memory, then the compiler must generate re-locatable code. In this case the final binding is delayed until load time.
• Execution time: if the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this to work.
Logical- Versus Physical-Address Space

• An address generated by the CPU is commonly referred to as a logical address,
• whereas an address seen by the memory unit—that is, the one loaded into the memory-address register of the memory—is commonly referred to as the physical address.
• The set of all logical addresses generated by a program forms the logical address space of a process; the set of all physical addresses corresponding to these logical addresses is the physical address space of the process.
• The total size of the physical address space in a system is equal to the size of its main memory.
Memory Management Unit (MMU)
• The run-time mapping from virtual to physical
addresses is done by a piece of hardware in
the CPU, called the memory management unit
(MMU).
Translation Examples
• The following two examples show two simple ways of translating logical addresses into physical addresses. In both cases, there is a “base” register loaded with the address of the first byte in the program (instruction or data—in the second example, separate registers point to the beginning of the code, data, and stack portions of a program). In the first case, the base register is called the relocation register. The logical address is translated into the corresponding physical address by adding the logical address to the value of the relocation register.
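The relocation-register translation described above can be sketched in Python; the base value 14000 is a hypothetical example, not taken from the slide:

```python
# Sketch of relocation-register translation (illustrative, not real hardware).
RELOCATION_REGISTER = 14000  # hypothetical base: address of the program's first byte

def translate(logical_address: int) -> int:
    """Physical address = relocation register value + logical address."""
    return RELOCATION_REGISTER + logical_address

print(translate(346))  # 14346
```

In real hardware this addition happens in the MMU on every memory reference; the operating system only sets the relocation register when it dispatches the process.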
• In the i8086, the logical address of the next instruction is specified by the value of the instruction pointer (IP). The physical address for the instruction is computed by shifting the code segment register (CS) left by four bits and adding IP to it.
• The following example shows the logical address for a program instruction and the computation of the physical address for that logical address.
• Logical address (16-bit)
• IP = 0B10h, CS = D000h
• Physical address (20-bit)
• CS * 2^4 + IP = D0000h + 0B10h = D0B10h
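The i8086 real-mode computation above can be checked with a short sketch; shifting CS left by four bits is the same as multiplying by 2^4 = 16:

```python
# i8086 real-mode address formation: shift CS left 4 bits, then add IP.
def real_mode_address(cs: int, ip: int) -> int:
    return (cs << 4) + ip  # CS * 2**4 + IP

cs, ip = 0xD000, 0x0B10
print(hex(real_mode_address(cs, ip)))  # 0xd0b10
```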
Various techniques for memory
management
• Here are some techniques of memory
management, which are used in addition to
the main techniques of memory management
such as paging and segmentation discussed
later in the course.
Dynamic Loading
• The size of a process is limited to the size of physical memory. To obtain
better memory space utilization, we can use dynamic loading.
• Examples: a word processor's error-handling routines and other rarely used, software-specific tasks.
• With dynamic loading, a routine is not loaded until it is called.
• All routines are kept on a disk in a re-locatable format. The main program
is loaded into memory and is executed.
• When a routine needs to call another routine, the calling routine first
checks to see whether the other routine has been loaded or not.
• If not, the re-locatable linking loader is called to load the desired routine
into the memory and to update the program’s address tables to reflect
this change.
• The control is then passed to the newly loaded routine.
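The check-then-load pattern in the bullets above can be sketched as follows; `load_from_disk` is a hypothetical stand-in for the re-locatable linking loader, which in a real system would also update the program's address tables:

```python
# Sketch of dynamic loading: a routine is loaded only on its first call.
loaded_routines = {}  # routines currently resident in memory

def load_from_disk(name):
    # Hypothetical loader: a real system invokes the re-locatable
    # linking loader and patches the program's address tables.
    print(f"loading {name} from disk")
    return lambda: f"{name} result"

def call_routine(name):
    if name not in loaded_routines:                    # not yet loaded?
        loaded_routines[name] = load_from_disk(name)   # bring it into memory
    return loaded_routines[name]()                     # pass control to it

call_routine("spell_check")  # loads on first call
call_routine("spell_check")  # already resident: no second load
```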
Advantages and disadvantages of dynamic
loading
• The advantage of dynamic loading is that an unused routine is never loaded.
• This means that potentially less time is needed to load a program and less memory space is required.
• However, the run-time activity involved in dynamic loading (checking for, loading, and relocating routines) is a disadvantage.
Memory Management techniques
• In contiguous memory allocation, each process is allocated a single block of memory that stays together in one place; the space given to a process is not scattered here and there across the whole memory space.
• In contiguous memory allocation, both the operating system and the user program reside in main memory. Main memory is divided into two portions: one portion for the operating system and the other for user programs.
• Two categories:
Fixed sized partition (Static)

• Fixed sized partitioning is also called Multiprogramming with Fixed Tasks (MFT).
• In the fixed sized partition scheme, the system divides memory into fixed-size partitions (which may or may not all be the same size).
• Thus the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
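The MFT scheme above can be sketched as a table of fixed partitions, each holding at most one process; the partition sizes here are hypothetical:

```python
# Sketch of fixed-partition (MFT) allocation: each partition holds at most
# one process, so the degree of multiprogramming <= number of partitions.
partitions = [100, 200, 300]   # hypothetical fixed partition sizes (KB)
occupant = [None, None, None]  # which process, if any, holds each partition

def allocate(pid, size):
    for i, part_size in enumerate(partitions):
        if occupant[i] is None and size <= part_size:
            occupant[i] = pid
            return i           # unused space (part_size - size) is wasted:
    return None                # internal fragmentation

def free(pid):
    for i, p in enumerate(occupant):
        if p == pid:
            occupant[i] = None

print(allocate("P1", 150))  # 1 (first free partition large enough)
```

A 150 KB process placed in the 200 KB partition wastes 50 KB inside the partition, which is exactly the internal fragmentation that motivates the variable-partition scheme.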
Variable size partition (Dynamic)

• Variable size partitioning is also called Multiprogramming with Variable Tasks (MVT).
• In the variable size partition scheme, memory is treated as one unit; the space allocated to a process is exactly as much as it requires, and the leftover space can be reused.
• This is the generalization of the fixed partition scheme. It was used primarily in batch environments.
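The MVT idea above can be sketched as a free list of holes searched with a first-fit policy (one common choice; the slide does not name a specific policy), where the leftover space after each allocation becomes a new, reusable hole:

```python
# Sketch of variable-partition (MVT) allocation with a first-fit search
# over a free list of (start, size) holes; sizes are hypothetical KB values.
free_list = [(0, 1000)]  # one hole covering all of user memory

def allocate(size):
    for i, (start, hole) in enumerate(free_list):
        if size <= hole:
            if hole - size > 0:
                free_list[i] = (start + size, hole - size)  # leftover hole
            else:
                free_list.pop(i)                            # hole fully used
            return start
    return None  # no hole large enough

print(allocate(300))  # 0
print(allocate(200))  # 300
```

Because each process gets exactly what it asks for, there is no internal fragmentation, but repeated allocation and freeing scatters small holes across memory, producing external fragmentation.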
Non-contiguous memory allocation

• In non-contiguous memory allocation, the free memory space is scattered here and there rather than sitting in one place, so managing it is time-consuming. A process still acquires the memory space it needs, but not in one place: it is placed at different locations according to the process's requirements. This technique reduces the memory wastage that contiguous allocation suffers through internal and external fragmentation.
• Non-contiguous memory allocation is of different types:
• Paging
• Segmentation
• Segmentation with paging
Overlays
• To enable a process to be larger than the
amount of memory allocated to it, we can use
overlays. The idea of overlays is to keep in
memory only those instructions and data that
are needed at any given time. When other
instructions are needed, they are loaded into
space occupied previously by instructions that
are no longer needed.
Overlays
• Used in embedded systems (fixed functionality)
• Example: a 2-pass assembler/compiler
• Available main memory: 150k
• Total code size: 230k
• Pass 1 ……………….. 80k
• Pass 2 ……………….. 90k
• Common routines …... 30k
• Symbol table ………… 20k
• Overlay driver ……….. 10k
Continued…
• Common routines, symbol table, overlay driver,
and Pass 1 code are loaded into the main
memory for the program execution to start.
• When Pass 1 has finished its work, Pass 2 code is loaded on top of the Pass 1 code (because that code is not needed anymore). This way, we can execute a 230K process in 150K of memory.
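The arithmetic behind the overlay example can be checked directly from the sizes listed on the slide; the resident set in each phase is one pass plus the parts that stay in memory throughout:

```python
# Checking the overlay example's arithmetic (sizes in KB, from the slide).
pass1, pass2, common, symtab, driver = 80, 90, 30, 20, 10

total = pass1 + pass2 + common + symtab + driver    # full program: 230 KB
resident_pass1 = pass1 + common + symtab + driver   # phase 1: 140 KB
resident_pass2 = pass2 + common + symtab + driver   # phase 2: 150 KB

print(total, resident_pass1, resident_pass2)  # 230 140 150
assert max(resident_pass1, resident_pass2) <= 150   # fits in 150 KB of RAM
```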
Pictorial Representation
Swapping
• A process needs to be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
• Example: round-robin scheduling with a fixed time quantum.
• When a quantum expires, the memory manager starts to swap out the process that just finished and to swap in another process to the memory space that has been freed.
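The round-robin swapping behaviour described above can be sketched as a queue of swapped-out processes on the backing store, with one process resident at a time (a deliberate simplification; real systems keep several processes in memory):

```python
# Sketch of round-robin swapping: on quantum expiry, the running process
# is swapped out to the backing store and the next one is swapped in.
from collections import deque

ready = deque(["P1", "P2", "P3"])  # processes waiting on the backing store
in_memory = None                   # the single resident process (simplified)

def on_quantum_expiry():
    global in_memory
    if in_memory is not None:
        ready.append(in_memory)    # swap out to the backing store
    in_memory = ready.popleft()    # swap in the next process
    return in_memory

print([on_quantum_expiry() for _ in range(4)])  # ['P1', 'P2', 'P3', 'P1']
```

The main cost of this scheme is transfer time to and from the backing store, which is why the quantum must be large relative to the swap time for the CPU to do useful work.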
