Uploaded by Vinayak Garg

Module-5

Memory Unit

• Basic concepts, Internal organization of memory chips, Structure of larger memory
• Cache memories, Mapping functions
Basic Concepts
• The maximum size of the memory that can be used in any
computer is determined by the addressing scheme.
• Ex. a 16-bit computer that generates 16-bit addresses is capable of addressing up to 2^16 = 64K memory locations.
• Similarly, machines whose instructions generate 32-bit addresses can utilize a memory that contains up to 2^32 = 4G (giga) memory locations.
• Machines with 40-bit addresses can access up to 2^40 = 1T (tera) locations.
• The number of locations represents the size of the address
space of the computer.
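The relationship between address width and address-space size can be illustrated with a short sketch (plain arithmetic; the function name is illustrative):

```python
# Number of addressable locations for a k-bit address is 2**k.
def address_space_size(k):
    return 2 ** k

print(address_space_size(16))  # 65536 (64K) for 16-bit addresses
print(address_space_size(32))  # 4294967296 (4G) for 32-bit addresses
print(address_space_size(40))  # 1099511627776 (1T) for 40-bit addresses
```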
Basic Concepts
• From the system standpoint, we can view the memory unit as a
black box.
• Data transfer between the memory and the processor takes
place through the use of two processor registers, usually called
MAR (memory address register) and MDR (memory data
register)
• If MAR is k bits long and MDR is n bits long, then the memory unit may contain up to 2^k addressable locations.
• During a memory cycle, n bits of data are transferred between
the memory and the processor. This transfer takes place over
the processor bus, which has k address lines and n data lines.
• The bus also includes the control lines Read/Write (R/W) and
Memory Function Completed (MFC) for coordinating data
transfers.
Basic Concepts
• The processor reads data from the memory by loading the
address of the required memory location into the MAR register
and setting the R/W line to 1.
• The memory responds by placing the data from the addressed
location onto the data lines, and confirms this action by asserting
the MFC signal. Upon receipt of the MFC signal, the processor
loads the data on the data lines into the MDR register.
• The processor writes data into a memory location by loading
the address of this location into MAR and loading the data into
MDR. It indicates that a write operation is involved by setting
the R/W line to 0.
• If read or write operations involve consecutive address locations
in the main memory, then a "block transfer" operation can be
performed in which the only address sent to the memory is the
one that identifies the first location.
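The read and write sequences described above can be modeled in software. The following is a minimal sketch, assuming a simple list-backed memory; the class and method names are illustrative, not part of any real hardware interface:

```python
# Toy model of the processor-memory interface described above.
# rw = 1 means Read, rw = 0 means Write, matching the R/W line in the text.
class Memory:
    def __init__(self, k, n):
        self.n = n                    # word length in bits (not enforced here)
        self.words = [0] * (2 ** k)   # 2^k addressable locations

    def access(self, mar, mdr, rw):
        if rw == 1:                   # Read: memory places data on the lines,
            return self.words[mar]    # which the processor loads into MDR
        else:                         # Write: the MDR contents are stored
            self.words[mar] = mdr
            return mdr

mem = Memory(k=4, n=8)                    # tiny 16 x 8 memory
mem.access(mar=3, mdr=0x5A, rw=0)         # write 0x5A to location 3
print(mem.access(mar=3, mdr=None, rw=1))  # read it back -> prints 90 (0x5A)
```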
Internal Organization of Memory Chip
• Memory cells are usually organized in the form of an array, in
which each cell is capable of storing one bit of information.
• A possible organization is illustrated in Figure 5.2. Each row of
cells constitutes a memory word, and all cells of a row are
connected to a common line referred to as the word line,
which is driven by the address decoder on the chip.
• The cells in each column are connected to a Sense/Write
circuit by two bit lines. The Sense/Write circuits are connected
to the data input/output lines of the chip.
• During a Read operation, these circuits sense, or read, the
information stored in the cells selected by a word line and
transmit this information to the output data lines. During a
Write operation, the Sense/Write circuits receive input
information and store it in the cells of the selected word.
Internal Organization of Memory Chip
• Figure 5.2 is an example of a very small memory chip
consisting of 16 words of 8 bits each. This is referred to as a
16 x 8 organization.
• The data input and the data output of each Sense/Write
circuit are connected to a single bidirectional data line that
can be connected to the data bus of a computer.
• Two control lines, R/W and CS, are provided in addition to
address and data lines.
• The R/W (Read/Write) input specifies the required operation,
and the CS (Chip Select) input selects a given chip in a
multichip memory system.
• The memory circuit in Figure 5.2 stores 128 bits and requires
14 external connections for address, data, and control lines.
Of course, it also needs two lines for power supply and
ground connections.
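The 128-bit capacity and the 14 external connections quoted above follow from simple arithmetic, sketched below (the split into 4 address, 8 data, and 2 control lines is the breakdown implied by the 16 x 8 organization):

```python
words, bits_per_word = 16, 8
address_lines = (words - 1).bit_length()  # 4 lines address 16 words
data_lines = bits_per_word                # 8 bidirectional data lines
control_lines = 2                         # R/W and CS

print(words * bits_per_word)                       # 128 bits stored
print(address_lines + data_lines + control_lines)  # 14 external connections
```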
Internal Organization of Memory Chip
Organization of 1K x 1 memory chip.
• Consider now a slightly larger memory circuit, one that has 1K
(1024) memory cells.
• This circuit can be organized into a 1K x 1 format.
• In this case, a 10-bit address is needed, but there is only one
data line, resulting in 15 external connections. Figure 5.3
shows such an organization.
• The required 10-bit address is divided into two groups of 5
bits each to form the row and column addresses for the cell
array.
• A row address selects a row of 32 cells, all of which are
accessed in parallel. However, according to the column
address, only one of these cells is connected to the external
data line by the output multiplexer and input demultiplexer.
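The address split described above can be sketched as follows; treating the high-order 5 bits as the row address and the low-order 5 bits as the column address is an assumption made here for illustration:

```python
# Split a 10-bit address into a 5-bit row and a 5-bit column address,
# as in the 1K x 1 organization (a 32 x 32 cell array).
def split_address(addr):
    row = (addr >> 5) & 0x1F  # assumed: high-order 5 bits select 1 of 32 rows
    col = addr & 0x1F         # assumed: low-order 5 bits select 1 of 32 columns
    return row, col

print(split_address(0b10110_01101))  # -> (22, 13)
```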
Structure of Large Memory
Organization of a 2M x 32 memory module using 512K x 8 static memory
chips.
• Consider a memory consisting of 2M (2,097,152) words of 32 bits each. Figure 5.10 shows how we can implement this memory using 512K x 8 static memory chips.
• Each column in the figure consists of four chips, which implement one byte position. Four of these sets provide the required 2M x 32 memory.
• Each chip has a control input called Chip Select. When this input is set to 1, it enables the chip to accept data from or to place data on its data lines.
• The data output for each chip is of the three-state type. Only the selected chip places data on the data output lines, while all other outputs are in the high-impedance state.
• Twenty-one address bits are needed to select a 32-bit word in this memory. The high-order 2 bits of the address are decoded to determine which of the four Chip Select control signals should be activated, and the remaining 19 address bits are used to access specific byte locations inside each chip of the selected row.
• The R/W inputs of all chips are tied together to provide a common Read/Write control (not shown in the figure).
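The 21-bit address decode described above can be sketched as follows (the function name is illustrative):

```python
# Decode a 21-bit word address for the 2M x 32 module built from
# 512K x 8 chips: the high-order 2 bits pick one of four Chip Select
# signals (one row of chips); the low 19 bits address within each chip.
def decode(addr21):
    chip_select = (addr21 >> 19) & 0b11  # selects one of four chip rows
    offset = addr21 & ((1 << 19) - 1)    # 19-bit address inside the chips
    return chip_select, offset

cs, off = decode(0b10_0000000000000001010)
print(cs, off)  # -> 2 10
```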
Cache Memory

The cache is logically between the CPU and main memory. Physically, there are several possible places it could be located.
Mapping Function

• To discuss possible methods for specifying where memory blocks are placed in the cache, let us consider a small example.
• Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K) words.
• Assume that the main memory is addressable by a 16-
bit address.
• The main memory has 64K words, which we will
view as 4K blocks of 16 words each.
• Note: assume that consecutive addresses refer to
consecutive words.
Mapping Function
1. Direct Mapping
• The simplest way to determine cache locations in which
to store memory blocks is the direct-mapping technique.
• In this technique, block j of the main memory maps onto
block j modulo 128 of the cache, as depicted in Figure
5.15.
• Whenever one of the main memory blocks 0, 128, 256, ... is loaded into the cache, it is stored in cache block 0. Blocks 1, 129, 257, ... are stored in cache block 1, and so on.
• Since more than one memory block is mapped onto a
given cache block position, contention may arise for that
position even when the cache is not full.
• For example, the instructions of a program may start in block 1 and continue in block 129, possibly after a
branch. As this program is executed, both of these
blocks must be transferred to the block-1 position in
the cache. Contention is resolved by allowing the
new block to overwrite the currently resident block.
In this case, the replacement algorithm is trivial.
• Placement of a block in the cache is determined from
the memory address. The memory address can be
divided into three fields.
Mapping Function

1. Direct Mapping
• The low-order 4 bits select one of 16 words in a block.
• When a new block enters the cache, the 7-bit cache block field
determines the cache position in which this block must be
stored.
• The high-order 5 bits of the memory address of the block are
stored in 5 tag bits associated with its location in the cache.
• They identify which of the 32 blocks that are mapped into this
cache position are currently resident in the cache.
• As execution proceeds, the 7-bit cache block field of each address generated by the processor points to a particular block location in the cache.
• The high-order 5 bits of the address are compared
with the tag bits associated with that cache location.
If they match, then the desired word is in that block
of the cache.
• If there is no match, then the block containing the
required word must first be read from the main
memory and loaded into the cache.
• The direct-mapping technique is easy to implement,
but it is not very flexible.
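The direct-mapped lookup for this example (5-bit tag, 7-bit block, 4-bit word) can be sketched as a simple model; the empty/valid state of a cache entry, modeled with None below, is an assumption not spelled out in the text:

```python
# 16-bit address = 5-bit tag | 7-bit block | 4-bit word (direct mapping).
NUM_BLOCKS = 128

def fields(addr):
    word = addr & 0xF           # low-order 4 bits: word within the block
    block = (addr >> 4) & 0x7F  # next 7 bits: cache block position (j mod 128)
    tag = (addr >> 11) & 0x1F   # high-order 5 bits: tag
    return tag, block, word

# Cache modeled as a table of tags, one per block position.
tags = [None] * NUM_BLOCKS      # None models an empty (invalid) entry

def access(addr):
    tag, block, _ = fields(addr)
    if tags[block] == tag:
        return "hit"
    tags[block] = tag           # miss: the new block overwrites the resident one
    return "miss"

print(access(0x1234))  # miss (cold cache)
print(access(0x1234))  # hit (same block now resident)
```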
Mapping Function
2. Associative Mapping
• This is a much more flexible mapping method, in which a main memory block can be placed into any cache block.
• 12 tag bits are required to identify a memory block when it is
resident in the cache. The tag bits of an address received
from the processor are compared to the tag bits of each block
of the cache to see if the desired block is present.
• It gives complete freedom in choosing the cache location in
which to place the memory block.
• A new block that has to be brought into the cache has to
replace (eject) an existing block only if the cache is full.
• The cost of an associative cache is higher than the cost of a direct-mapped cache because of the need to search all 128 tag patterns to determine whether a given block is in the cache.
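A minimal sketch of the associative search for the same example (12-bit tags, 128 cache positions); the FIFO choice of which block to eject when the cache is full is an assumption, since the text does not specify a replacement algorithm:

```python
# Associative mapping: a memory block may occupy any cache position,
# so the 12-bit block number serves as the tag to search for.
CACHE_BLOCKS = 128
resident = []                  # tags of the blocks currently in the cache

def access(addr):
    tag = addr >> 4            # 12 high-order bits identify the memory block
    if tag in resident:        # compare against every resident tag
        return "hit"
    if len(resident) == CACHE_BLOCKS:
        resident.pop(0)        # eject a block only when the cache is full
    resident.append(tag)       # (FIFO ejection assumed for illustration)
    return "miss"

print(access(0x0010))  # miss
print(access(0x001F))  # hit: same 16-word block as address 0x0010
```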
Mapping Function
3. Set-Associative Mapping
• Set-Associative Mapping: blocks of the cache are grouped into sets, and the mapping allows a block of the main memory to reside in any block of a specific set.
1. Hence, the contention problem of the direct
method is eased by having a few choices for block
placement. At the same time, the hardware cost is
reduced by decreasing the size of the associative
search.
2. In this case, memory blocks 0, 64, 128, ..., 4032
map into cache set 0, and they can occupy either of
the two block positions within this set.
3. Having 64 sets means that the 6-bit set field of the
address determines which set of the cache might
contain the desired block. The tag field of the address
must then be associatively compared to the tags of the
two blocks of the set to check if the desired block is
present. This two-way associative search is simple to
implement.
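The two-way set-associative lookup for the example (64 sets, 6-bit set field, 6-bit tag) can be sketched as follows; FIFO replacement within a set is again an assumed policy:

```python
# Two-way set-associative mapping: 64 sets, block j of memory maps to
# set j mod 64; within a set the two entries are searched associatively.
NUM_SETS = 64
sets = [[] for _ in range(NUM_SETS)]  # each set holds up to two tags

def access(addr):
    block = addr >> 4                 # 12-bit memory block number
    set_index = block % NUM_SETS      # equals the 6-bit set field
    tag = block // NUM_SETS           # remaining 6 bits form the tag
    if tag in sets[set_index]:
        return "hit"
    if len(sets[set_index]) == 2:
        sets[set_index].pop(0)        # replace within the set (FIFO assumed)
    sets[set_index].append(tag)
    return "miss"

# Memory blocks 0 and 64 both map to set 0 and can coexist there:
print(access(0 << 4))   # miss
print(access(64 << 4))  # miss, but block 0 stays resident
print(access(0 << 4))   # hit
```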
