Lecture 12: Introduction to Cache

Rose Gomar
Department of Systems and Computer Engineering
Textbook/Copyright
• Hennessy, John L., and David A. Patterson. Computer Architecture: A Quantitative Approach. Elsevier, 6th edition, 2017, Chapter 2.
• Hennessy, John L., and David A. Patterson. Computer Architecture: A Quantitative Approach. Elsevier, 6th edition, Appendix B.
• Patterson, David A., and John L. Hennessy. Computer Organization and Design: RISC-V Edition, Chapter 5.
• Part of the slides are provided by Elsevier (Copyright © 2019, Elsevier Inc. All rights reserved).

What do we learn in this lecture?
• Motivation for cache
• Direct-map caches

Principle of Locality
• The principle of locality is applied in memory system design
• Programs access a small proportion of their address space at any time
• Temporal locality
  • Items accessed recently are likely to be accessed again soon
  • E.g., instructions in a loop, induction variables
• Spatial locality
  • Items near those accessed recently are likely to be accessed soon
  • E.g., sequential instruction access, array data
Principle of Locality
[Figure: memory addresses referenced over time; accesses cluster into bands, illustrating locality. From Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971)]
Taking Advantage of Locality
• Memory hierarchy
• Store everything on disk
• Copy recently accessed (and nearby) items from disk to
smaller DRAM memory
• Main memory
• Copy more recently accessed (and nearby) items from
DRAM to smaller SRAM memory
• Cache memory attached to CPU
Memory Hierarchy Levels
• Block: unit of copying
  • May be multiple words
• If accessed data is present in upper level
  • Hit: access satisfied by upper level
  • Hit ratio: hits/accesses
• If accessed data is absent
  • Miss: block copied from lower level
  • Time taken: miss penalty
  • Miss ratio: misses/accesses = 1 – hit ratio
  • Then accessed data supplied from upper level
Cache Memory
• Cache memory: the level of the memory hierarchy closest to the CPU
• Given accesses X1, …, Xn–1, Xn
• Block placement: Where can a block be placed in the cache?
• Block identification: How is a block found if it is in the cache?
• Block replacement: Which block should be replaced on a miss?
• Write strategy: What happens on a write?
Direct Mapped Cache
• Location determined by address
• Direct mapped: only one choice
  (Block address) modulo (#Blocks in cache)
• #Blocks is a power of 2
• Use low-order address bits
Tags and Valid Bits
• How do we know which particular block is stored in a cache location?
  • Store block address as well as the data
  • Actually, only need the high-order bits
  • Called the tag
• What if there is no data in a location?
  • Valid bit: 1 = present, 0 = not present
  • Initially 0
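To make the bookkeeping concrete, here is a minimal C sketch (not from the slides; the 8-line size and 1-word blocks are assumptions matching the example that follows). Each cache line pairs a valid bit and a tag with its data, and a block address is split into index and tag by the modulo rule above.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 8   /* assumption: 8 lines, matching the example below */

typedef struct {
    bool     valid;    /* 1 = line holds a block, 0 = empty (initially 0) */
    uint64_t tag;      /* high-order bits of the stored block address */
    uint32_t data;     /* one 4-byte word per block */
} CacheLine;

/* Direct mapped: (block address) modulo (#blocks) selects the only
   possible line; the remaining high-order bits are compared as the tag. */
static bool is_hit(const CacheLine cache[NUM_BLOCKS], uint64_t block_addr)
{
    uint64_t index = block_addr % NUM_BLOCKS;  /* low-order bits, since 8 is a power of 2 */
    uint64_t tag   = block_addr / NUM_BLOCKS;  /* remaining high-order bits */
    return cache[index].valid && cache[index].tag == tag;
}
```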
Example: Cache
• 8 blocks, 1 word/block, direct mapped
• 1 word = 4 bytes = 32 bits
• Initial state

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N
Example: Cache
• 8 blocks, 1 word/block, direct mapped
• 22 modulo 8 = 6 -> block index 110

Word addr  Binary addr  Hit/miss  Cache block
22         10 110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N
Example: Cache
• 8 blocks, 1 word/block, direct mapped
• 22 modulo 8 = 6 -> block index 110

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N
Example: Cache

Word addr  Binary addr  Hit/miss  Cache block
26         11 010

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N
Example: Cache

Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N
Example: Cache

Word addr  Binary addr  Hit/miss  Cache block
22         10 110
26         11 010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N
Example: Cache

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N
Discussion: Cache

Word addr  Binary addr  Hit/miss  Cache block
16
3
16

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N
Discussion: Cache

Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
Discussion: Cache

Word addr  Binary addr  Hit/miss  Cache block
18

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
Discussion: Cache

Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]   (replaced Mem[11010])
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
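As an illustrative aside, the following self-contained C sketch replays the word addresses from the examples above (22, 26, 22, 26, 16, 3, 16, 18) through an 8-block, 1-word-per-block direct-mapped cache; its hit/miss output should match the tables, including the final miss where address 18 replaces Mem[11010] at index 010.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8

int main(void)
{
    struct { bool valid; uint64_t tag; } cache[NUM_BLOCKS] = {0};
    /* Word addresses from the example and discussion slides above */
    uint64_t trace[] = {22, 26, 22, 26, 16, 3, 16, 18};

    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        uint64_t index = trace[i] % NUM_BLOCKS;  /* low-order 3 bits */
        uint64_t tag   = trace[i] / NUM_BLOCKS;  /* high-order bits */

        if (cache[index].valid && cache[index].tag == tag) {
            printf("addr %2llu -> index %llu: hit\n",
                   (unsigned long long)trace[i], (unsigned long long)index);
        } else {                       /* miss: fetch block, replacing old content */
            cache[index].valid = true;
            cache[index].tag   = tag;
            printf("addr %2llu -> index %llu: miss\n",
                   (unsigned long long)trace[i], (unsigned long long)index);
        }
    }
    return 0;
}
```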
Address Subdivision
• Assume
  • 64-bit address space
  • 1024 blocks
  • Word size = 4 bytes
• We will have
  • 10-bit index for the cache
  • 2 bits of byte offset for word alignment (ignored if aligned)
  • The remaining bits are stored as the tag (the tag is also stored in the cache)
• In reality each block stores multiple words of data
Cache Specification
[Figure: the cache as 2^n blocks (Block 0, Block 1, ...), each holding a valid bit, a tag, and 2^m words of 4 bytes]

Assume:
• 64-bit address
• A direct-mapped cache
• Cache size = 2^n blocks
• Block size = 2^m words (2^(m+2) bytes), assuming word size = 4 bytes
Then:
• Tag size = 64 − (n + m + 2)
• Total number of bits in a direct-mapped cache:
  2^n × (block size + tag size + valid bit)
  = 2^n × (2^m × 32 + 64 − n − m − 2 + 1)
Example/Discussion: Cache Specification
How many bits are required for a direct-mapped cache with 16 KiB of data and four-word blocks, assuming a 64-bit address?

No. of blocks = 16 KiB of data / (4 words × 4 bytes/word per block) = 2^10 blocks

2^10 × (4 × 32 + 64 − 10 − 2 − 2 + 1) = 2^10 × 179 = 179 Kibit ≈ 22.4 KiB
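A small C sketch of the sizing formula above (assuming the 64-bit addresses and 32-bit words from the slides); for n = 10 and m = 2 it reproduces the 179 Kibit answer.

```c
#include <stdio.h>

/* Total bits in a direct-mapped cache with 2^n blocks of 2^m words,
   assuming 64-bit addresses and 32-bit words (formula from the slide). */
static long total_bits(int n, int m)
{
    long tag_size = 64 - (n + m + 2);              /* +2: byte offset within a word */
    long per_line = (1L << m) * 32 + tag_size + 1; /* data + tag + valid bit */
    return (1L << n) * per_line;
}

int main(void)
{
    /* 16 KiB of data in 4-word blocks: 2^10 blocks (n = 10, m = 2) */
    long bits = total_bits(10, 2);
    printf("%ld bits = %ld Kibit = %.1f KiB\n",
           bits, bits / 1024, bits / (8.0 * 1024.0));
    /* prints: 183296 bits = 179 Kibit = 22.4 KiB */
    return 0;
}
```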
Example/Discussion: Larger Block Size
• Assume 64 blocks and 16 bytes/block for a direct-mapped cache.
To what block number does address 1200 map?
Example: Larger Block Size
• 64 blocks, 16 bytes/block
To what block number does address 1200 map?
• Block address = 1200/16 = 75
• Block number = 75 modulo 64 = 11
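Because the block size (2^4 bytes) and block count (2^6) are powers of two, the same mapping can be done with shifts and masks; this is an illustrative sketch, not slide material.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t addr   = 1200;
    uint64_t offset = addr & 0xF;          /* byte offset within a 16-byte block */
    uint64_t index  = (addr >> 4) & 0x3F;  /* block number within 64 blocks */
    uint64_t tag    = addr >> 10;          /* remaining high-order bits */

    printf("offset=%llu index=%llu tag=%llu\n",
           (unsigned long long)offset,
           (unsigned long long)index,
           (unsigned long long)tag);
    /* prints: offset=0 index=11 tag=1, matching 1200/16 = 75 and 75 mod 64 = 11 */
    return 0;
}
```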
Block Size Considerations
• Larger blocks should reduce miss rate
  • Due to spatial locality
• But in a fixed-sized cache
  • Larger blocks -> fewer of them
  • More competition -> increased miss rate
• Larger miss penalty
  • Can override benefit of reduced miss rate
  • Some optimization techniques may help
Cache Misses
• On cache hit, CPU proceeds normally
• On cache miss (handled by the CPU control unit and the cache controller)
  • Stall the CPU pipeline
  • Fetch block from next level of hierarchy
  • If instruction cache miss
    • Restart instruction fetch
  • If data cache miss
    • Complete data access
Write-Through
• On data-write hit, could just update the block in cache
  • But then cache and memory would be inconsistent
• Write through: also update memory
• But makes writes take longer
  • Example: if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 extra cycles
    • Effective CPI = 1 + 0.1 × 100 = 11
  • CPI: average clock cycles per instruction for a particular processor
• Any solution to improve performance?
• Solution: write buffer
  • A queue that holds data while the data are waiting to be written to memory
  • CPU continues immediately
  • Only stalls on a write if the write buffer is already full
Write-Back
• A scheme that handles writes by updating values only in the block in the cache, then writing the modified block to the lower level of the hierarchy when the block is replaced
• On data-write hit, just update the block in cache
  • Keep track of whether each block is dirty
• When a dirty block is replaced
  • Write it back to memory
  • Can use a write buffer to allow the replacing block to be read first
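A minimal C sketch of the dirty-bit bookkeeping described above, using a toy one-word-per-block cache and an in-memory array standing in for main memory (all sizes are illustrative assumptions, not slide material):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 8              /* illustrative size, not from the slide */
static uint32_t memory[1024];     /* toy backing store, one word per block address */

typedef struct {
    bool     valid;
    bool     dirty;               /* block modified since it was fetched */
    uint64_t tag;
    uint32_t data;                /* one word per block, for simplicity */
} CacheLine;

/* Write-back with fetch-on-miss: on a write hit only the cache is updated;
   memory is updated only when a dirty block is evicted. */
static void write_word(CacheLine cache[NUM_BLOCKS], uint64_t addr, uint32_t value)
{
    uint64_t index = addr % NUM_BLOCKS;
    uint64_t tag   = addr / NUM_BLOCKS;
    CacheLine *line = &cache[index];

    if (!(line->valid && line->tag == tag)) {                    /* write miss */
        if (line->valid && line->dirty)                          /* evicting a dirty block: */
            memory[line->tag * NUM_BLOCKS + index] = line->data; /* write it back first */
        line->data  = memory[addr];                              /* usually fetch the block */
        line->tag   = tag;
        line->valid = true;
    }
    line->data  = value;   /* update the cache only */
    line->dirty = true;    /* mark for write-back on a later eviction */
}
```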
Write Miss Policy: Write Allocation
• What should happen on a write miss?
• For write-through
  • Allocate on miss: fetch the block
  • No allocate: update the portion of the block in memory but do not put it in the cache
• For write-back
  • Usually fetch the block
Example: Intrinsity FastMATH
• Embedded MIPS processor
  • 12-stage pipeline
  • Instruction and data access on each cycle
• Split cache: separate I-cache and D-cache
  • Each 16KB: 256 blocks × 16 words/block
  • D-cache: write-through or write-back
• SPEC2000 miss rates
  • I-cache: 0.4%
  • D-cache: 11.4%
  • Weighted average: 3.2%
A Real Processor Example: Intrinsity FastMATH
• 16KB: 256 blocks × 16 words/block
• 32-bit memory address space
A Real Processor Example: Intrinsity FastMATH
• Steps for a read:
1. Send the address to the appropriate cache.
   • Either from the PC (for an instruction) or from the ALU (for data).
2. If the cache signals hit, the requested word is available on the data lines.
   • The block offset field is used to control the multiplexor (shown at the bottom of the figure), which selects the requested word from the 16 words in the indexed block.
3. If the cache signals miss, we send the address to the main memory. When the memory returns the data, we write it into the cache and then read it to fulfill the request.
• For writes, the Intrinsity FastMATH offers both write-through and write-back, leaving it up to the operating system to decide which strategy to use for an application.
• Performance
  • Instruction miss rate: 0.4%
  • Data miss rate: 11.4%
  • Combined miss rate: 3.2%
Measuring Cache Performance
• Components of CPU time
  • Program execution cycles
    • Includes cache hit time
  • Memory stall cycles
    • Mainly from cache misses
• With simplifying assumptions:

  Memory stall cycles
  = (Memory accesses / Program) × Miss rate × Miss penalty
  = (Instructions / Program) × (Misses / Instruction) × Miss penalty
Example/Discussion: Cache Performance
• Given
• I-cache miss rate = 2%
• D-cache miss rate = 4%
• Miss penalty = 100 cycles
• Base CPI (ideal cache) = 2
• Loads & stores are 36% of instructions

Calculate miss cycles for I-cache and D-cache and find actual CPI.
Example: Cache Performance
• Given
• I-cache miss rate = 2%
• D-cache miss rate = 4%
• Miss penalty = 100 cycles
• Base CPI (ideal cache) = 2
• Loads & stores are 36% of instructions
• Miss cycles per instruction
• I-cache: 0.02 × 100 = 2 (2% of instructions with 100 cycles miss
penalty)
• D-cache: 0.36 × 0.04 × 100 = 1.44 (36% of the instructions are
load/store with miss rate of 4% and miss penalty of 100 cycles)
• Actual CPI = 2 + 2 + 1.44 = 5.44
• CPI for a perfect cache =2
• What happens if the processor is made faster but the memory
system is not?
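Before that discussion, a quick numeric check: this small C program recomputes the miss cycles and actual CPI from the values given above.

```c
#include <stdio.h>

int main(void)
{
    double base_cpi     = 2.0;   /* ideal-cache CPI from the example */
    double miss_penalty = 100.0; /* cycles */
    double i_miss_rate  = 0.02;  /* I-cache miss rate */
    double d_miss_rate  = 0.04;  /* D-cache miss rate */
    double ls_fraction  = 0.36;  /* loads/stores per instruction */

    double i_stalls = i_miss_rate * miss_penalty;                  /* 2.00 cycles/instr */
    double d_stalls = ls_fraction * d_miss_rate * miss_penalty;    /* 1.44 cycles/instr */
    printf("actual CPI = %.2f\n", base_cpi + i_stalls + d_stalls); /* prints 5.44 */
    return 0;
}
```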
Discussion: Cache Performance
• In the previous example, assume the base CPI is improved to 1. How does the performance compare against the previous example?
• What portion of the execution time is spent on memory stalls?

• CPI = 1 + 3.44 = 4.44
• CPI with stalls / CPI with perfect cache = 4.44 / 1 = 4.44
• With the previous CPI = 5.44: execution time spent on memory stalls = 3.44/5.44 = 63%
• With the improved CPI = 4.44: execution time spent on memory stalls = 3.44/4.44 = 77%
Average Access Time
• Hit time is also important for performance
• Average memory access time (AMAT)
  AMAT = Hit time + Miss rate × Miss penalty
• Example
  • CPU with 1 ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%
  • AMAT = 1 + 0.05 × 20 = 2 ns
    • 2 cycles per instruction
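And the AMAT example as a one-line computation (values from the slide):

```c
#include <stdio.h>

int main(void)
{
    double hit_time     = 1.0;   /* cycles (1 ns at a 1 ns clock) */
    double miss_rate    = 0.05;  /* I-cache miss rate */
    double miss_penalty = 20.0;  /* cycles */

    /* AMAT = Hit time + Miss rate x Miss penalty */
    printf("AMAT = %.1f ns\n", hit_time + miss_rate * miss_penalty); /* prints 2.0 ns */
    return 0;
}
```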
Performance Summary
• When CPU performance increases
  • Miss penalty becomes more significant
• Decreasing base CPI
  • Greater proportion of time spent on memory stalls
• Increasing clock rate
  • Memory stalls account for more CPU cycles
• Can't neglect cache behavior when evaluating system performance
• How to reduce hit time?
• Associative caches (covered in a future course)
