
DIGITAL LOGIC AND COMPUTER ARCHITECTURE (22ITC01)

FACULTY: E RAMALAKSHMI
UNIT-5
Memory Organization
1. Introduction

A memory unit is a collection of storage units or devices. The memory unit stores binary information in the form of bits. Generally, memory/storage is classified into two categories:

 Volatile Memory: This loses its data when power is switched off.

 Non-Volatile Memory: This is permanent storage and does not lose any data when power is switched off.

2. Memory Hierarchy

The memory hierarchy system consists of all storage devices contained in a computer, from the slow auxiliary memory to the faster main memory and the still faster but smaller cache memory.

Auxiliary memory access time is generally about 1000 times that of main memory; hence it is at the bottom of the hierarchy.

The main memory occupies the central position because it is equipped to communicate directly with the CPU, and with auxiliary memory devices through the Input/Output processor.

When a program not residing in main memory is needed by the CPU, it is brought in from auxiliary memory. Programs not currently needed in main memory are transferred back into auxiliary memory.

The cache memory is used to store program data that is currently being executed in the CPU. The approximate access-time ratio between cache memory and main memory is about 1 to 7~10, i.e., cache memory is roughly 7 to 10 times faster than main memory.

3. Memory Access Methods

Memory is a collection of numerous memory locations. To access data from any memory, it must first be located, and then the data is read from that memory location. The following are the methods to access information from memory locations:

 Random Access: Main memories are random-access memories, in which each memory location has a unique address. Using this unique address, any memory location can be reached in the same amount of time, in any order.

 Sequential Access: This method allows memory access only in sequence or in order. Eg. Magnetic tape.

 Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write head. Eg. Magnetic disk.

4. Main Memory

The memory unit that communicates directly with the CPU, auxiliary memory, and cache memory is called the main memory. It is the central storage unit of the computer system. It is a large and fast memory used to store data during computer operations. Main memory is made up of a larger RAM and a relatively smaller ROM.
5. Auxiliary Memory

Devices that provide backup storage are called auxiliary memory. For
example, Magnetic disks and tapes are commonly used auxiliary devices.
Other devices used as auxiliary memory are magnetic drums and optical
disks. It is not directly accessible to the CPU, and is accessed using the
Input/Output Processor.

6. Associative Memory

An associative memory can be considered as a memory unit whose stored data can be identified for access by the content of the data itself rather than by an address or memory location. Associative memory is also referred to as Content Addressable Memory (CAM).

Since the entire chip can be compared at once, contents are stored without regard to any addressing scheme. These chips have less storage capacity than regular memory chips.

When a word is to be read from an associative memory, the content of the word, or part of the word, is specified. The words which match the specified content are located by the memory and are marked for reading.

The following diagram shows the block representation of an associative memory.

An associative memory consists of a memory array and logic for 'm' words with 'n' bits per word. The argument register A and key register K each have 'n' bits, one for each bit of a word. The match register M consists of 'm' bits, one for each memory word. The words which are kept in the memory are compared in parallel with the content of the argument register.

The key register (K) provides a mask for choosing a particular field or key in
the argument word. If the key register contains a binary value of all 1's,
then the entire argument is compared with each memory word. Otherwise,
only those bits in the argument that have 1's in their corresponding
position of the key register are compared.

The following diagram can represent the relation between the memory array
and the external registers in an associative memory
Associative Memory of ‘m’ words and ‘n’ cells per word

The cells present inside the memory array are marked by the letter C with
two subscripts. The first subscript gives the word number and the second
specifies the bit position in the word. For instance, the cell Cij is the cell for
bit j in word i.

A bit Aj in the argument register is compared with all the bits in column j of
the array provided that Kj = 1. This process is done for all columns j = 1, 2,
3......, n. If a match occurs between all the unmasked bits of the argument
and the bits in word i, the corresponding bit Mi in the match register is set
to 1. If one or more unmasked bits of the argument and the word do not
match, Mi is cleared to 0.

When a write operation is performed on an associative memory, no address or memory location is given for the word. The memory itself is capable of finding an empty, unused location in which to store the word.
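As a rough sketch, the masked parallel comparison described above can be modeled in Python; the word values, register contents, and function name here are illustrative, not from the text, and a loop stands in for what the hardware does in parallel:

```python
def cam_match(words, argument, key):
    """Match register M: M[i] = 1 when word i agrees with the
    argument on every bit position where the key register has a 1."""
    return [1 if (w & key) == (argument & key) else 0 for w in words]

memory = [0b1011, 0b1001, 0b0011, 0b1011]   # m = 4 words, n = 4 bits each
A = 0b1011                                   # argument register
K = 0b1100                                   # mask: compare only the two high bits
M = cam_match(memory, A, K)
print(M)  # [1, 1, 0, 1]: words 0, 1 and 3 have high bits '10'
```

With K set to all 1's, the entire argument participates in the comparison, mirroring the unmasked case described in the text.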
7. Cache Memory

The data or contents of main memory that are used again and again by the CPU are stored in the cache memory so that the data can be accessed in a shorter time. Cache memory exploits the Principle of Locality of Reference:

 Temporal locality of reference

A phenomenon in which a computer program tends to access the same set of memory locations repeatedly over a particular time period (e.g., loops in a program).

 Spatial Locality of Reference

The tendency of a computer program to access instructions whose addresses are near one another.

Average execution time of the program can be reduced by placing this active
portion of program and data in faster cache memory.

Whenever the CPU needs to access memory, it first checks the cache. If the data is not found in the cache, the CPU moves on to main memory. It also transfers a block of recent data into the cache, deleting old data from the cache to accommodate the new.

When the CPU refers to memory and finds the word in the cache, it is called a hit. If the word is not found in the cache but is in main memory, it is known as a miss.

 Total No. of Memory Accesses = No. of Hits + No. of Misses

 Hit Ratio (h) = Total No. of Hits / Total No. of Memory Accesses by CPU

 Average Memory Access Time: Tavg = h * Tc + (1 - h) * (Tm + Tc)

Where,

h - Probability of a hit (the hit ratio)

Tm - Main Memory Access Time

Tc - Cache Memory Access Time
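A short worked example of the formulas above; the hit/miss counts and access times are hypothetical figures chosen for illustration:

```python
hits, misses = 950, 50   # assumed counts for one program run
Tc, Tm = 10, 100         # assumed access times in ns

accesses = hits + misses              # total memory accesses
h = hits / accesses                   # hit ratio
Tavg = h * Tc + (1 - h) * (Tm + Tc)   # average memory access time

print(h)            # 0.95
print(round(Tavg))  # about 15 ns: 0.95*10 + 0.05*(100+10)
```

Note that a miss costs Tm + Tc here, since the cache is checked first and main memory is accessed only after the miss.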

8. Cache Mapping Techniques

There are three different types of mapping used for the purpose of cache
memory:

 Direct mapping
 Associative mapping
 Set-Associative mapping.

Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K) words, and assume that the main memory is addressable by a 16-bit address. Main memory is 64K words, which will be viewed as 4K blocks of 16 words each.
Direct Mapping:-

 The simplest way to store main memory blocks in cache memory is the Direct Mapping technique.

 In this, block J of the main memory maps onto block (J modulo 128) of the cache (line). Thus main memory blocks 0, 128, 256, … are loaded into the cache at block 0; blocks 1, 129, 257, … are stored at block 1, and so on.

 Placement of a block in the cache is determined from the main memory address. The main memory address is divided into 3 fields; the lower 4 bits select one of the 16 words in a block.

 When a new block enters the cache, the 7-bit cache block field determines the cache/line position in which this block must be stored.

 The higher-order 5 bits of the main memory address are stored in 5 tag bits associated with its location in the cache. They identify which of the 32 blocks that map into this cache position is currently resident in the cache.

It is easy to implement, but not flexible.
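Under the 16-bit address of this example (5 tag bits, 7 block bits, 4 word bits), the field split can be sketched as follows; the function name and sample addresses are illustrative:

```python
def direct_map_fields(addr):
    """Split a 16-bit main memory address into direct-mapping fields."""
    word  = addr & 0xF           # low 4 bits: word within the block
    block = (addr >> 4) & 0x7F   # next 7 bits: cache block (line) number
    tag   = (addr >> 11) & 0x1F  # high 5 bits: tag
    return tag, block, word

# Main memory blocks 0, 128, 256 all map to cache block 0,
# distinguished only by their tags 0, 1, 2:
for mem_block in (0, 128, 256):
    addr = mem_block * 16        # address of the first word of the block
    print(direct_map_fields(addr))
```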


Associative Mapping:-

 This is a more flexible mapping method, in which any main memory block can be placed into any cache block position (line).

 In this, 12 tag bits are required to identify a memory block when it is resident in the cache.

 The tag bits of an address received from the processor are compared with the tag bits of each block of the cache to see if the desired block is present. This is known as the Associative Mapping technique.

 The cost of an associative-mapped cache is higher than that of a direct-mapped cache because of the match logic required to search all 128 tag patterns to determine whether a block is in the cache.

Set-Associative Mapping:-

 It is a combination of the Direct and Associative Mapping techniques.

 Cache blocks are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set. Hence the contention problem of direct mapping is reduced. Also, hardware cost is reduced by decreasing the size of the associative search.

 Consider a cache with two blocks per set. In this case, memory blocks 0, 64, 128, …, 4032 map into cache set 0, and they can occupy either of the two blocks within this set.

 Having 64 sets means that the 6-bit set field of the address determines which set of the cache might contain the desired block. The tag bits of the address must be associatively compared with the tags of the two blocks of the set to check if the desired block is present. This is a two-way associative search.
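The set selection just described reduces to a modulo operation; this small sketch uses an illustrative helper name and the figures from the two-way example (64 sets):

```python
SETS = 64   # 128 cache blocks / 2 blocks per set

def set_index(mem_block):
    """Main memory block j maps to set (j mod 64); within that set
    it may occupy either of the two blocks."""
    return mem_block % SETS

print([set_index(b) for b in (0, 64, 128, 4032)])  # [0, 0, 0, 0]: all in set 0
```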

9. Virtual Memory

Virtual memory is a concept used in some large computer systems that permits the user to construct programs as though a large memory space were available, equal to the totality of auxiliary memory.

Each address that is referenced by the CPU goes through an address mapping from the so-called virtual address to a physical address in main memory.
Virtual memory is used to give programmers the illusion that they have a
very large memory at their disposal, even though the computer actually has
a relatively small main memory.

A virtual memory system provides a mechanism for translating program-generated addresses into correct main memory locations.

Address Space and Memory Space

An address used by a programmer will be called a virtual address, and the set of such addresses is termed the address space. An address in main memory is called a physical address. The set of these physical addresses is called the memory space.

Eg: Consider a computer with a main-memory capacity of 32K words. Fifteen bits are needed to specify a physical address in memory, since 32K = 2^15.

Suppose that the computer has available auxiliary memory for storing 2^20 = 1024K words. Auxiliary memory = 1024K = 2^20 = 2^5 * 2^15 = 32 main memories. Denoting the address space by N and the memory space by M, we then have N = 1024K and M = 32K.
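The arithmetic in this example can be checked directly; the variable names follow N and M from the text:

```python
M = 32 * 1024      # memory space: 32K words of main memory
N = 1024 * 1024    # address space: 1024K words of auxiliary memory

print(M == 2**15)  # True: 15 bits specify a physical address
print(N == 2**20)  # True: 20 bits specify a virtual address
print(N // M)      # 32: auxiliary memory holds 32 main memories
```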
Address Mapping Using Pages:

A memory mapping table is needed to map a virtual address of 20 bits to a physical address of 15 bits.

Memory Table for mapping Virtual Address


The physical memory is broken down into groups of equal size called blocks; the address space is broken into groups of the same size called pages. For example, if a page or block consists of 1K words, then the address space is divided into 1024 pages and main memory is divided into 32 blocks.

Consider a computer with an address space of 8K and a memory space of 4K. If we split each into groups of 1K words, we obtain eight pages and four blocks as shown in Fig. 12-18. At any given time, up to four pages of address space may reside in main memory in any one of the four blocks.

The mapping from address space to memory space is facilitated if each virtual address is considered to be represented by two numbers: a page number and a line within the page. In a computer with 2^p words per page, p bits are used to specify a line address and the remaining high-order bits of the virtual address specify the page number. In the example of Fig. 12-18, a virtual address has 13 bits. Since each page consists of 2^10 = 1024 words, the higher-order three bits of a virtual address specify one of the eight pages and the low-order 10 bits give the line address within the page. Note that the line address in address space and memory space is the same; the only mapping required is from a page number to a block number.

Address Space and Memory Space split into groups of 1K words
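The page/line split of the 13-bit virtual address described above can be sketched as follows; the function name is illustrative:

```python
PAGE_BITS, LINE_BITS = 3, 10   # 8 pages of 1K (2^10) words each

def split_virtual(addr):
    """Split a 13-bit virtual address into (page number, line address)."""
    page = addr >> LINE_BITS
    line = addr & ((1 << LINE_BITS) - 1)
    return page, line

print(split_virtual(0b101_0000000011))  # (5, 3): page 5, line 3
```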

The organization of the memory mapping table in a paged system is shown in the following figure.

Memory Table in a Paged System


The memory-page table consists of eight words, one for each page. The address in the page table denotes the page number, and the content of the word gives the block number where that page is stored in main memory.

The table shows that pages 1, 2, 5, and 6 are now available in main memory in
blocks 3, 0, 1, and 2, respectively.

A presence bit in each location indicates whether the page has been transferred
from auxiliary memory into main memory.

A ‘0’ in the presence bit indicates that this page is not available in main memory.

The CPU references a word in memory with a virtual address of 13 bits. The three
high-order bits of the virtual address specify a page number and also an address
for the memory-page table. The content of the word in the memory page table at
the page number address is read out into the memory table buffer register.

If the presence bit is a 1, the block number thus read is transferred to the two
high-order bits of the main memory address register.

The line number from the virtual address is transferred into the 10 low-order bits
of the memory address register. A read signal to main memory transfers the
content of the word to the main memory buffer register ready to be used by the
CPU. If the presence bit in the word read from the page table is 0, it signifies that
the content of the word referenced by the virtual address does not reside in main
memory. A call to the operating system is then generated to fetch the required page from auxiliary memory and place it into main memory before resuming computation.
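A minimal sketch of the translation flow just described, using the example table (pages 1, 2, 5, and 6 resident in blocks 3, 0, 1, and 2); a None entry stands in for a presence bit of 0, and the names are illustrative:

```python
page_table = [None, 3, 0, None, None, 1, 2, None]  # indexed by page number

LINE_BITS = 10   # 1K words per page -> 10 line bits

def translate(virtual_addr):
    """Map a 13-bit virtual address to a 12-bit physical address."""
    page = virtual_addr >> LINE_BITS             # high 3 bits: page number
    line = virtual_addr & ((1 << LINE_BITS) - 1) # low 10 bits: line address
    block = page_table[page]
    if block is None:                            # presence bit = 0
        raise LookupError(f"page fault: page {page} not in main memory")
    return (block << LINE_BITS) | line           # block number + line address

print(translate((1 << 10) | 20))   # page 1 -> block 3: 3*1024 + 20 = 3092
```

A reference to a page whose entry is None raises the page-fault path, corresponding to the call to the operating system in the text.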

Associative Memory Page Table

A random-access memory page table is inefficient with respect to storage utilization. In the example of Fig. 12-19 we observe that eight words of memory are needed, one for each page, but at least four will always be marked empty because main memory cannot accommodate more than four blocks.

In general, a system with n pages and m blocks requires a memory-page table of n locations, of which up to m will be marked with block numbers and all others will be empty.
As a second numerical example, consider an address space of 1024K words and a memory space of 32K words. If each page or block contains 1K words, the number of pages is 1024 and the number of blocks is 32. The capacity of the memory-page table must be 1024 words, and only 32 locations may have a presence bit equal to 1. At any given time, at least 992 locations will be empty and not in use.

A more efficient way to organize the page table is to construct it with a number of words equal to the number of blocks in main memory. In this way the size of the memory table is reduced and each location is fully utilized.

This method can be implemented by means of an associative memory, with each word in memory containing a page number together with its corresponding block number. The page field in each word is compared with the page number in the virtual address. If a match occurs, the word is read from memory and its corresponding block number is extracted. Consider again the case of eight pages and four blocks, as shown in the figure below.

An Associative Memory Page Table

Each entry in the associative memory array consists of two fields. The first three
bits specify a field for storing the page number. The last two bits constitute a field
for storing the block number. The virtual address is placed in the argument
register. The page number bits in the argument register are compared with all page
numbers in the page field of the associative memory. If the page number is found,
the 5-bit word is read out from memory. The corresponding block number, being in
the same word, is transferred to the main memory address register. If no match
occurs, a call to the operating system is generated to bring the required page from
auxiliary memory.
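The associative lookup can be sketched with a loop standing in for the parallel hardware comparison; the entries follow the eight-page/four-block example, and the names are illustrative:

```python
# (page number, block number) pairs for the four resident pages
entries = [(1, 3), (2, 0), (5, 1), (6, 2)]

def lookup(page):
    """Compare the page number against the page field of every entry;
    the hardware performs these comparisons in parallel."""
    for p, block in entries:
        if p == page:
            return block
    return None   # no match: page fault, OS must fetch the page

print(lookup(5))   # 1: page 5 resides in block 1
print(lookup(4))   # None: page 4 is not in main memory
```

Note that only four entries are needed, one per block, instead of the eight words required by the random-access page table.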

Page Replacement

A virtual memory system is a combination of hardware and software techniques. The memory management software handles all the software operations for the efficient utilization of memory space. It must decide

 which page in main memory ought to be removed to make room for a new
page,
 when a new page is to be transferred from auxiliary memory to main
memory, and
 where the page is to be placed in main memory.

The hardware mapping mechanism and the memory management software together constitute the architecture of a virtual memory.

When a program starts execution, one or more pages are transferred into main memory and the page table is set to indicate their position. The program is executed from main memory until it attempts to reference a page that is still in auxiliary memory. This condition is called a page fault.

When a page fault occurs in a virtual memory system, it signifies that the
page referenced by the CPU is not in main memory. A new page is then
transferred from auxiliary memory to main memory. If main memory is full,
it would be necessary to remove a page from a memory block to make room
for the new page.

The policy for choosing pages to remove is determined from the replacement
algorithm that is used. The goal of a replacement policy is to try to remove
the page least likely to be referenced in the immediate future. Two of the
most common replacement algorithms used are the first-in, first-out (FIFO)
and the least recently used (LRU).
The FIFO algorithm selects for replacement the page that has been in memory the longest time. The FIFO queue is full whenever main memory has no more empty blocks. When a new page must be loaded, the page that was brought in earliest is removed.

Advantage
Easy to implement
Disadvantage
Pages are removed and loaded from memory too frequently.
LRU

The LRU policy is more difficult to implement, but it is more attractive on the assumption that the least recently used page is a better candidate for removal than the earliest loaded page chosen by FIFO.
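A toy simulation contrasting the two policies on a single reference string; the reference string and frame count are made up for illustration:

```python
from collections import OrderedDict

def count_faults(refs, frames, lru):
    """Count page faults; evict the oldest entry (FIFO) or the
    least recently used entry (LRU) when all frames are occupied."""
    resident = OrderedDict()        # insertion order tracks age
    faults = 0
    for page in refs:
        if page in resident:
            if lru:
                resident.move_to_end(page)   # a hit refreshes recency under LRU
            continue
        faults += 1
        if len(resident) == frames:
            resident.popitem(last=False)     # remove the front of the queue
        resident[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 4, lru=False))  # FIFO: 10 faults
print(count_faults(refs, 4, lru=True))   # LRU: 8 faults
```

On this particular string LRU faults less often than FIFO, consistent with the assumption above, though no policy wins on every reference pattern.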
