Dlca 22itc01 Unit 5
FACULTY: E RAMALAKSHMI
UNIT-5
Memory Organization
1. Introduction
Volatile Memory: This loses its data when power is switched off.
Non-Volatile Memory: This is permanent storage and does not lose its data
when power is switched off.
2. Memory Hierarchy
Auxiliary memory access time is generally 1000 times that of the main
memory, hence it is at the bottom of the hierarchy.
The main memory occupies the central position because it is equipped to
communicate directly with the CPU and with auxiliary memory devices
through the Input/Output processor.
When a program not residing in main memory is needed by the CPU, it is
brought in from auxiliary memory. Programs not currently needed in main
memory are transferred back into auxiliary memory.
The cache memory is used to store the program and data currently being
executed by the CPU. The approximate access time ratio between cache
memory and main memory is about 1 to 7-10, i.e. cache memory is roughly
10 times faster than main memory.
4. Main Memory
The memory unit that communicates directly with the CPU, auxiliary
memory and cache memory is called the main memory. It is the central
storage unit of the computer system. It is a large and fast memory used to
store data during computer operations. Main memory is made up of a
larger RAM and a relatively smaller ROM.
5. Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For
example, Magnetic disks and tapes are commonly used auxiliary devices.
Other devices used as auxiliary memory are magnetic drums and optical
disks. It is not directly accessible to the CPU, and is accessed using the
Input/Output Processor.
6. Associative Memory
An associative memory is content addressable: a word is accessed by its
content rather than by its address. Since the entire chip can be compared
at once, contents are stored without regard to any addressing scheme.
These chips have less storage capacity than regular memory chips.
The key register (K) provides a mask for choosing a particular field or key in
the argument word. If the key register contains a binary value of all 1's,
then the entire argument is compared with each memory word. Otherwise,
only those bits in the argument that have 1's in their corresponding
position of the key register are compared.
The following diagram represents the relation between the memory array
and the external registers in an associative memory:
Associative Memory of ‘m’ words and ‘n’ cells per word
The cells present inside the memory array are marked by the letter C with
two subscripts. The first subscript gives the word number and the second
specifies the bit position in the word. For instance, the cell Cij is the cell for
bit j in word i.
A bit Aj in the argument register is compared with all the bits in column j of
the array provided that Kj = 1. This process is done for all columns j = 1, 2,
3......, n. If a match occurs between all the unmasked bits of the argument
and the bits in word i, the corresponding bit Mi in the match register is set
to 1. If one or more unmasked bits of the argument and the word do not
match, Mi is cleared to 0.
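The masked comparison described above can be sketched in a few lines of Python. The 4-bit word size and the sample memory contents are assumptions for illustration only:

```python
# A minimal sketch of the associative-memory match operation: M[i] = 1
# if word i agrees with the argument in every bit position where the
# key register K holds a 1. Word size n = 4 bits is assumed here.

def match_words(argument, key, words, n=4):
    matches = []
    for word in words:
        ok = all(
            ((argument >> j) & 1) == ((word >> j) & 1)
            for j in range(n)
            if (key >> j) & 1          # compare only unmasked columns
        )
        matches.append(1 if ok else 0)
    return matches

# Argument 1010, key 1110: only the three high-order bits take part,
# so 1011 still matches (it differs only in the masked low bit).
memory = [0b1010, 0b1011, 0b0110]
print(match_words(0b1010, 0b1110, memory))  # [1, 1, 0]
```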
7. Cache Memory
The data or contents of main memory that are used again and again by the
CPU are stored in the cache memory so that they can be accessed in a
shorter time. Cache memory uses the principle of locality of reference.
The average execution time of a program can be reduced by placing the
active portion of the program and its data in the faster cache memory.
Whenever the CPU needs to access memory, it first checks the cache
memory. If the data is not found in the cache, the CPU moves on to main
memory. It also transfers the block of recent data into the cache, deleting
old data in the cache to accommodate the new.
When the CPU refers to memory and finds the word in the cache, it is
called a hit. If the word is not found in the cache and must be fetched
from main memory, it is known as a miss.
The performance of cache memory is measured in terms of the hit ratio:
h = number of hits / total CPU references to memory
Average access time = h * tc + (1 - h) * tm
where h is the probability of a hit, tc is the cache access time, and tm is
the main memory access time.
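The benefit of a high hit ratio can be seen numerically. A small sketch using the standard formula t = h * tc + (1 - h) * tm, with assumed timings of 10 ns for cache and 100 ns for main memory:

```python
# Average access time with a cache. The timings below (tc = 10 ns,
# tm = 100 ns) are illustrative assumptions, not figures from the notes.

def avg_access_time(h, tc, tm):
    """h: hit probability, tc: cache access time, tm: main memory time."""
    return h * tc + (1 - h) * tm

print(avg_access_time(0.9, 10, 100))  # 19.0 ns -- close to cache speed
```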
8. Cache Mapping
There are three different types of mapping used for cache memory:
Direct mapping
Associative mapping
Set-Associative mapping.
Direct Mapping:-
The simplest way to place main memory blocks in cache memory is the
direct mapping technique.
In this technique, block J of main memory maps onto block (J modulo 128)
of the cache (line). Thus main memory blocks 0, 128, 256, ... are loaded
into cache block 0; blocks 1, 129, 257, ... are stored in block 1, and so on.
When a new block enters the cache, the 7-bit cache block field determines
the cache/line position in which the block must be stored.
The higher-order 5 bits of the main memory address are stored in the 5 tag
bits associated with its location in the cache. They identify which of the 32
blocks that map into this cache position is currently resident in the cache.
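As a sketch, the tag/block/word decomposition above can be written in Python. The 16-bit address and 4-bit word field (16 words per block) are assumptions chosen to be consistent with the 7-bit block field and 5-bit tag of the example:

```python
# Splitting a 16-bit main-memory address under direct mapping, assuming
# 128 cache blocks (7-bit block field), a 5-bit tag, and 16 words per
# block (4-bit word field) -- the word-field width is an assumption.

def direct_map(addr):
    word  = addr & 0xF           # low 4 bits: word within the block
    block = (addr >> 4) & 0x7F   # next 7 bits: cache block (= block mod 128)
    tag   = (addr >> 11) & 0x1F  # high 5 bits: tag stored with the block
    return tag, block, word

# Main-memory block 129 maps to cache block 129 mod 128 = 1, with tag 1.
print(direct_map(129 * 16))  # (1, 1, 0): first word of block 129
```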
Associative Mapping:-
This is a more flexible mapping method, in which any main memory block
can be placed into any cache block position (line).
The tag bits of an address received from the processor are compared to the
tag bits of each block of the cache to see if the desired block is present.
This is known as the associative mapping technique.
Set-Associative Mapping:-
Cache blocks are grouped into sets, and the mapping allows a block of
main memory to reside in any block of a specific set. Hence the contention
problem of direct mapping is reduced. Also, hardware cost is reduced by
decreasing the size of the associative search.
Consider a cache with two blocks per set. In this case, memory blocks
0, 64, 128, ..., 4032 map into cache set 0, and each can occupy either of
the two blocks within this set.
Having 64 sets means that the 6-bit set field of the address determines
which set of the cache might contain the desired block. The tag bits of the
address must be associatively compared to the tags of the two blocks of the
set to check if the desired block is present. This is a two-way associative
search.
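The two-way set-associative address split can be sketched the same way. The 16-bit address and 4-bit word field are assumptions; the 6-bit set field follows from the 64 sets above, leaving a 6-bit tag:

```python
# Address split for the two-way set-associative example: 64 sets
# (6-bit set field), and -- assuming 16 words per block as before --
# a 4-bit word field and a 6-bit tag in a 16-bit address.

def set_assoc_map(addr):
    word = addr & 0xF            # 4-bit word-in-block
    set_ = (addr >> 4) & 0x3F    # 6-bit set field (= block mod 64)
    tag  = (addr >> 10) & 0x3F   # 6-bit tag, compared within the set
    return tag, set_, word

# Memory blocks 0, 64, 128, ... all fall into set 0 with different tags.
print(set_assoc_map(64 * 16))   # (1, 0, 0): block 64 -> set 0, tag 1
```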
9. Virtual Memory
Virtual memory gives the programmer the illusion of a memory space as
large as the auxiliary memory, even though the physical main memory is
much smaller. An address used by the programmer is called a virtual
address, and the set of such addresses is the address space; a location in
main memory is identified by a physical address, and the set of such
addresses is the memory space.
Suppose that the computer has auxiliary memory available for storing
2^20 = 1024K words, while the main memory stores 32K = 2^15 words.
Then auxiliary memory = 1024K = 2^20 = 2^5 * 2^15 = 32 main memories.
Denoting the address space by N and the memory space by M, we then
have N = 1024K and M = 32K.
Address Mapping Using Pages:
The physical memory is broken down into groups of equal size called
blocks; the address space is divided into groups of the same size called pages.
The mapping from address space to memory space is facilitated if each virtual
address is considered to be represented by two numbers: a page number
and a line within the page. In a computer with 2^p words per page, p bits are used
to specify a line address and the remaining high-order bits of the virtual address
specify the page number. In the example of Fig. 12-18, a virtual address has 13
bits. Since each page consists of 2^10 = 1024 words, the high-order three bits of a
virtual address specify one of the eight pages and the low-order 10 bits give the
line address within the page. Note that the line address in address space and
memory space is the same; the only mapping required is from a page number to a
block number.
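The page/line split of the 13-bit virtual address can be sketched as:

```python
# Splitting the 13-bit virtual address of the example: the 3 high-order
# bits select one of 8 pages, the low-order 10 bits give the line
# within the 1024-word page.

def split_virtual(addr):
    page = addr >> 10            # high-order 3 bits: page number
    line = addr & 0x3FF          # low-order 10 bits: line within page
    return page, line

print(split_virtual(0b101_0000000011))  # (5, 3): page 5, line 3
```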
The organization of the memory mapping table in a paged system is shown
in the following figure.
The table shows that pages 1, 2, 5, and 6 are now available in main memory in
blocks 3, 0, 1, and 2, respectively.
A presence bit in each location indicates whether the page has been transferred
from auxiliary memory into main memory.
A ‘0’ in the presence bit indicates that this page is not available in main memory.
The CPU references a word in memory with a virtual address of 13 bits. The three
high-order bits of the virtual address specify a page number and also an address
for the memory-page table. The content of the word in the memory page table at
the page number address is read out into the memory table buffer register.
If the presence bit is a 1, the block number thus read is transferred to the two
high-order bits of the main memory address register.
The line number from the virtual address is transferred into the 10 low-order bits
of the memory address register. A read signal to main memory transfers the
content of the word to the main memory buffer register ready to be used by the
CPU. If the presence bit in the word read from the page table is 0, it signifies that
the content of the word referenced by the virtual address does not reside in main
memory. A call to the operating system is then generated to fetch the required page
from auxiliary memory and place it into main memory before resuming
computation.
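The translation procedure above can be sketched in Python. The table contents follow the earlier example (pages 1, 2, 5, 6 resident in blocks 3, 0, 1, 2), and raising an exception stands in for the call to the operating system on a page fault:

```python
# A sketch of paged address translation: the page table maps resident
# pages to block numbers; an absent entry plays the role of a presence
# bit of 0 and triggers a page fault.

PAGE_TABLE = {1: 3, 2: 0, 5: 1, 6: 2}   # page -> block (resident pages only)

def translate(virtual_addr):
    page, line = virtual_addr >> 10, virtual_addr & 0x3FF
    if page not in PAGE_TABLE:           # presence bit = 0
        raise LookupError(f"page fault on page {page}")
    return (PAGE_TABLE[page] << 10) | line   # physical address = block, line

print(translate((2 << 10) | 7))   # page 2 -> block 0, line 7 -> 7
```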
A more efficient way to organize the page table is to construct it with a
number of words equal to the number of blocks in main memory. In this way the
size of the table is reduced and each location is fully utilized.
Each entry in the associative memory array consists of two fields. The first three
bits specify a field for storing the page number. The last two bits constitute a field
for storing the block number. The virtual address is placed in the argument
register. The page number bits in the argument register are compared with all page
numbers in the page field of the associative memory. If the page number is found,
the 5-bit word is read out from memory. The corresponding block number, being in
the same word, is transferred to the main memory address register. If no match
occurs, a call to the operating system is generated to bring the required page from
auxiliary memory.
Page Replacement
A memory management system must decide three things:
which page in main memory ought to be removed to make room for a new
page,
when a new page is to be transferred from auxiliary memory to main
memory, and
where the page is to be placed in main memory.
When a program starts execution, one or more pages are transferred into
main memory and the page table is set to indicate their position. The
program is executed from main memory until it attempts to reference a page
that is still in auxiliary memory. This condition is called a page fault.
When a page fault occurs in a virtual memory system, it signifies that the
page referenced by the CPU is not in main memory. A new page is then
transferred from auxiliary memory to main memory. If main memory is full,
it would be necessary to remove a page from a memory block to make room
for the new page.
The policy for choosing pages to remove is determined from the replacement
algorithm that is used. The goal of a replacement policy is to try to remove
the page least likely to be referenced in the immediate future. Two of the
most common replacement algorithms used are the first-in, first-out (FIFO)
and the least recently used (LRU).
The FIFO algorithm selects for replacement the page that has been in
memory the longest time. The FIFO queue is full whenever memory has no
more empty blocks. When a new page must be loaded, the page that was
brought in earliest is removed.
Advantage
Easy to implement
Disadvantage
Pages are removed and loaded from memory too frequently.
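A minimal sketch of FIFO replacement, counting page faults for a reference string. The 3-block memory size is an assumption for illustration:

```python
from collections import deque

# FIFO page replacement: when memory (3 blocks here) is full, evict
# the page that was loaded first, regardless of how recently it was used.

def fifo_faults(refs, frames=3):
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()      # remove the oldest resident page
            queue.append(page)
    return faults

# Page 1 is evicted at the reference to 4, so the final reference to 1
# faults again even though 1 was just used.
print(fifo_faults([1, 2, 3, 4, 1]))  # 5
```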
LRU
The LRU policy is more difficult to implement but has been more attractive
on the assumption that the least recently used page is a better candidate
for removal than the least recently loaded page, as in FIFO.
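A comparable sketch of LRU, again with an assumed 3-block memory, keeps pages ordered by recency of use:

```python
from collections import OrderedDict

# LRU page replacement: a hit moves the page to the most-recent end of
# an ordered map; on a fault with full memory (3 blocks here) the least
# recently used page is evicted.

def lru_faults(refs, frames=3):
    recency, faults = OrderedDict(), 0
    for page in refs:
        if page in recency:
            recency.move_to_end(page)        # used again: now most recent
        else:
            faults += 1
            if len(recency) == frames:
                recency.popitem(last=False)  # evict least recently used
            recency[page] = True
    return faults

# Unlike FIFO, the re-reference to 1 protects it: page 2 is evicted
# at the reference to 4, so no extra fault occurs.
print(lru_faults([1, 2, 3, 1, 4]))  # 4
```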