35-Cache Memory Block Identification in Direct Mapping, Associate Mapping and Set Associate-06-03-2

This document discusses cache memory organization and architecture. It begins by defining key terms like cache hits, cache misses, and miss penalty. It then covers cache memory management techniques like direct mapping, set associative mapping, and fully associative mapping. The document discusses how blocks are identified in a cache using tags, indexes, and offsets. It also covers cache block replacement policies like LRU and FIFO. Finally, it provides examples of calculating hit latency in a direct mapped cache.

Computer Architecture and

Organization
• Course Code: BCSE205L
• Course Type: Theory (ETH)
• Slot: A2+TA2
• Timings:
Monday 14:00-14:50
Wednesday 15:00-15:50
Friday 16:00-16:50

Dr. Venkata Phanikrishna B, SCOPE, VIT-Vellore


Module:4
Memory System Organization and Architecture
Cache Memory: BLOCK IDENTIFICATION


• An item found in the cache is said to correspond to a cache hit.
• An item not currently in the cache is a cache miss.
• The additional number of cycles required to service a miss is called the miss penalty.
• Hit ratio = (Number of references found in the cache) / (Total number of memory references)
• Miss ratio = (Number of references missed in the cache) / (Total number of memory references)
• Hit ratio + Miss ratio = 1
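These ratios can be checked with a short sketch. The reference counts below are illustrative values, not from the slides, and the function names are my own:

```python
# Hit ratio and miss ratio from reference counts (illustrative numbers).
def hit_ratio(refs_found, total_refs):
    return refs_found / total_refs

def miss_ratio(refs_missed, total_refs):
    return refs_missed / total_refs

total = 50                        # total memory references
found, missed = 45, 5             # 45 found in the cache, 5 missed
print(hit_ratio(found, total))    # 0.9
print(miss_ratio(missed, total))  # 0.1
# Hit ratio + miss ratio is always 1: every reference either hits or misses.
```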
Cache Memory Management Techniques

• Block Placement: Direct Mapping, Set Associative, Fully Associative
• Block Identification: Tag, Index, Offset
• Block Replacement: FCFS, LRU, Random
• Update Policies: Write Through, Write Back, Write Around, Write Allocate
Cache Memory Management Techniques

Block Identification: Tag, Index, Offset

Address format: | Tag (m-k) | Index (k-n) | Offset (n) |

• Main memory size = 2^m bytes
• Main memory block size = 2^n bytes
• Cache size = 2^k bytes
• Number of blocks in main memory = 2^m / 2^n = 2^(m-n)
• Number of lines in the cache (Index) = 2^k / 2^n = 2^(k-n)
• Candidate main-memory blocks per cache line (Tag) = 2^(m-n) / 2^(k-n) = 2^(m-k)
• Each cache line contains the same number of bytes as a memory block.
• Offset = n bits; Index = k-n bits; Tag = m-k bits.
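The tag/index/offset split can be sketched in code. The sizes m, n, k below are small illustrative values I chose, not values from the slides:

```python
# Split an address into tag / index / offset using the widths above:
# offset = n bits, index = k - n bits, tag = m - k bits.
m, n, k = 16, 4, 10        # 64 KB main memory, 16-byte blocks, 1 KB cache

def split_address(addr):
    offset = addr & ((1 << n) - 1)              # low n bits
    index = (addr >> n) & ((1 << (k - n)) - 1)  # next k - n bits
    tag = addr >> k                             # remaining m - k bits
    return tag, index, offset

print(split_address(0x1234))   # tag, index, offset for one sample address
```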
Cache: Mappings (Cache Memory Management Techniques)

Block Placement: Direct Mapping, Set Associative, Fully Associative


Drawback of direct mapping
The drawback of direct mapping is the conflict miss.

Consider an example: the cache has 4 lines (line 0 to line 3), and the following blocks are referenced: 4, 8, 16, 16, 20, 12, and 24. Under direct mapping, each block maps to line (block number mod 4):

Block 4 goes to 4 % 4 = 0, i.e. line 0
Block 8 goes to 8 % 4 = 0, i.e. line 0

Even though there are other lines in the cache, they are never used because of the direct-mapping restriction. This is the conflict-miss problem of direct mapping.
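A small simulation of this example (a sketch I wrote, not code from the slides) makes the effect visible: every block in the sequence maps to line 0, so lines 1 to 3 stay empty and only the repeated 16 hits:

```python
# Direct-mapped cache with 4 lines; each block maps to line (block % 4).
def simulate(block_refs, num_lines=4):
    lines = [None] * num_lines        # current block held by each line
    hits = misses = 0
    for block in block_refs:
        line = block % num_lines
        if lines[line] == block:
            hits += 1
        else:
            misses += 1               # a conflict miss if the line held another block
            lines[line] = block
    return hits, misses, lines

hits, misses, lines = simulate([4, 8, 16, 16, 20, 12, 24])
print(hits, misses)   # 1 hit (the second 16), 6 misses
print(lines)          # only line 0 was ever used
```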


Drawback of direct mapping
• A conflict miss is different from a capacity miss.
• A capacity miss occurs when the cache does not have enough capacity to hold all the referenced blocks, so some of them are missed.
• A conflict miss occurs when a block is evicted, and later missed, even though plenty of other cache space is unused, because each block is restricted to one particular line.
• In direct mapping, the miss is due to conflict, not capacity.


Hit Latency Calculation in Direct Mapping
When using direct mapping, does the CPU always know exactly where in the cache to look for a main-memory block once it outputs an address?
• Yes: the CPU can go immediately to that cache line, take its stored tag, and compare it with the tag generated from the main-memory address.
• The CPU can then determine whether the block is in the cache. If it is not, the CPU goes to main memory and fetches it.

• Hit latency is the time it takes to determine whether the needed block is in the cache.
• This time is consumed even when the access turns out to be a miss.
• Miss latency is the additional time needed to service a miss.


Hit Latency Calculation in Direct Mapping
Assume the cache has 4 lines, each with a tag, and (just for simplification) assume the tag is one bit wide.

With 4 lines and a one-bit tag, a 4-to-1 multiplexer is used.
Which of the 4 lines is selected (checked) depends on the line-number field of the CPU address.
The tag bit stored in the selected line appears at the multiplexer output; for example, if the line-number field is 01, the multiplexer selects the tag bit of line 01, as shown in the figure.

Hit Latency Calculation in Direct Mapping
Whenever the line number (LN) in the address field is 01, that particular cache line is selected. If the tag bit stored in that line is zero (0), that bit value comes out: the multiplexer output is zero.

The tag bit of the address field and the multiplexer output are both fed to a comparator, as shown in the diagram.

• The comparator compares the multiplexer output (the stored tag bit) with the tag field of the address:
• If they match, it produces an output of 1, which means a hit.
• If they do not match, it produces an output of 0, which means a miss.


Hit Latency Calculation in Direct Mapping

The time taken for this entire process is the hit latency. It is calculated from the latency of the multiplexer and the latency of the comparator:

Hit latency = latency of the multiplexer + latency of the comparator

If the multiplexer latency is 1 ns and the comparator latency is 1 ns,
then the hit latency of direct mapping is 1 ns + 1 ns = 2 ns.
Hit Latency Calculation in Direct Mapping
If there are two bits in the tag, two multiplexers are employed, because each multiplexer can extract only one bit of the tag. The number of multiplexers needed therefore depends on the number of tag bits.

With a two-bit tag, the first multiplexer is connected to the first tag bit and the second to the second tag bit of each cache line.
Both multiplexers are driven by the line-number bits of the address field.
As shown in the figure, the two tag bits of the address field are compared against the two multiplexer outputs.


Hit Latency Calculation in Direct Mapping
• If the tag is K bits wide, K multiplexers are required (each multiplexer can read only one bit of the tag, so K bits need K multiplexers).
• One comparator is sufficient, because only one tag field has to be checked: if the required block is present in the cache at all, it can be present only in one particular cache line.
• How many bits must the comparator compare? K bits.
• Important note: even though there are K multiplexers, only the latency of one multiplexer is counted, because they all work in parallel.
• Therefore consider the latency of one multiplexer plus one K-bit comparator. In most cases the multiplexer delay is negligible.
• If the comparator delay is 10·K ns, where K is the number of tag bits, and there are 2 bits in the tag field, then the delay is 10 × 2 = 20 ns.
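A sketch of this calculation (the function name and the default zero multiplexer delay are my assumptions, following the slide's note that multiplexer delays are usually negligible):

```python
# Hit latency with a K-bit tag: the K multiplexers work in parallel, so only
# one multiplexer delay counts, plus one K-bit comparator at 10 ns per tag bit.
def hit_latency_ns(k_tag_bits, mux_delay_ns=0):
    comparator_delay_ns = 10 * k_tag_bits   # slide's assumed comparator delay
    return mux_delay_ns + comparator_delay_ns

print(hit_latency_ns(2))   # 20 ns, matching the 2-bit-tag example
```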
Hit Latency Calculation in Direct Mapping
Problem: The main memory size is 1 GB and the cache size is 1 MB; the comparator propagation delay is 10·K ns. What is the hit latency?
Solution: Find the number of tag bits.
MM/cache ratio = 1 GB / 1 MB = 2^30 / 2^20 = 2^10
• This means 2^10 main-memory blocks map to each cache line.
• Therefore 10 bits are required in the tag.
• The comparator delay is then 10·K ns = 10 × 10 ns = 100 ns.

Note: In direct mapping, only one comparator is ever required, because we know exactly which cache line to look in. If the block is present there, it is a hit; if it is not present there, it is a miss, and it is not present anywhere else in the cache.
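The same solution worked out in code (a sketch; the variable names are mine):

```python
# Tag bits = log2(main memory size / cache size); comparator delay = 10*K ns.
mm_size = 1 << 30      # 1 GB
cache_size = 1 << 20   # 1 MB

blocks_per_line = mm_size // cache_size       # 2^10 candidates per cache line
tag_bits = blocks_per_line.bit_length() - 1   # log2(2^10) = 10
hit_latency_ns = 10 * tag_bits                # comparator delay at 10 ns/bit

print(tag_bits, hit_latency_ns)   # 10 tag bits, 100 ns
```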


Cache Memory Management Techniques

Block Replacement: LRU, FIFO, Random

Block Replacement
• Least Recently Used (LRU): replace the block in the set that has been in the cache longest with no reference to it.
• First In First Out (FIFO): replace the block in the set that has been in the cache longest.
• Least Frequently Used (LFU): replace the block in the set that has experienced the fewest references.
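Minimal sketches of LRU and FIFO for a single set (my own illustration, assuming a small fixed associativity; `access` returns True on a hit):

```python
from collections import OrderedDict

class LRUSet:
    """One cache set with LRU replacement."""
    def __init__(self, ways):
        self.ways, self.blocks = ways, OrderedDict()
    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit: mark most recently used
            return True
        if len(self.blocks) == self.ways:
            self.blocks.popitem(last=False)  # evict the least recently used
        self.blocks[block] = True
        return False

class FIFOSet(LRUSet):
    """One cache set with FIFO replacement: hits do not reorder blocks."""
    def access(self, block):
        if block in self.blocks:
            return True                      # hit: insertion order unchanged
        if len(self.blocks) == self.ways:
            self.blocks.popitem(last=False)  # evict the oldest insertion
        self.blocks[block] = True
        return False

# The same reference stream can behave differently under the two policies.
refs = [1, 2, 1, 3, 2]
lru, fifo = LRUSet(2), FIFOSet(2)
print([lru.access(b) for b in refs])   # LRU evicts block 2 at ref 3, so the final 2 misses
print([fifo.access(b) for b in refs])  # FIFO evicts block 1 (oldest), so the final 2 hits
```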