Load Store Execution in Processors – My learnings
Ramdas M
Core Block diagram

Load/Store Instructions

● Fixed-point Load/Stores
  – Ld RT, RA, RB (Power)
  – St RS, RA, RB (Power)
  – MOV register, [address] (x86)
  – MOV [address], register (x86)
● Floating-point Load/Stores
● Byte, half-word, word, double-word access
● String forms (block moves in x86)
● Locks
● Memory barriers (sfence, msync, etc.); a fence sketch follows this list
● Memory types (WB, UC, WT, WC)

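The barrier entries above are what software-visible ordering is built on. As a loose illustration (mine, not from the slides), the C++ sketch below publishes data behind a flag using std::atomic_thread_fence; depending on the target's memory model, compilers lower such fences to barrier instructions of the kind listed above (for example lwsync/sync on Power, while on x86 a release fence is usually a compile-time barrier only).

#include <atomic>
#include <cstdio>
#include <thread>

int payload = 0;                 // ordinary data
std::atomic<int> ready{0};       // flag published after the data

void producer() {
    payload = 42;                                            // plain store
    std::atomic_thread_fence(std::memory_order_release);     // order the data store before the flag store
    ready.store(1, std::memory_order_relaxed);
}

void consumer() {
    while (ready.load(std::memory_order_relaxed) == 0) { }   // spin until the flag is seen
    std::atomic_thread_fence(std::memory_order_acquire);     // order the flag load before the data load
    std::printf("payload = %d\n", payload);                  // guaranteed to print 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
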
Question 1

● What are the various steps in processing a load/store?

Load / Store Processing

● For both Loads and Stores:
  – Effective Address Generation:
    ● Must wait on the register value(s)
    ● Must perform the address calculation
  – Address Translation:
    ● Must access the TLB; can potentially induce a page fault (exception)
● For Loads: D-cache Access (Read)
  – Check aliasing against the store buffer for possible load forwarding
  – Can potentially induce a D-cache miss
  – If bypassing a store, must be flagged as a "speculative" load until completion
● For Stores: D-cache Access (Write)
  – When completing, must check aliasing against "speculative" loads
  – After completion, wait in the store buffer for access to the D-cache
  – Can potentially induce a D-cache miss

LSU pipeline

● RegFile Access
  – Read the source registers
● Address Generation
  – Add base, displacement, and immediate fields to generate an EA
● Cache Access
  – Bank access if the cache is multi-banked
  – Index into the set, tag comparison across ways (see the lookup sketch after this list)
  – TLB access
● Results
  – Target register write-back for loads
  – Store buffer/cache updates for stores
● Finish
  – Post instruction status (complete or flush, etc.)

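To make the "index into the set, tag comparison across ways" step concrete, here is a minimal C++ sketch of a set-associative D-cache lookup. The 32 KiB, 8-way, 64 B-line geometry is an illustrative assumption, not something the slides specify.

#include <array>
#include <cstdint>
#include <cstdio>
#include <optional>

constexpr uint64_t kLineBytes = 64;
constexpr int      kSets      = 64;   // 64 sets x 8 ways x 64 B lines = 32 KiB (assumed geometry)
constexpr int      kWays      = 8;

struct Line { bool valid = false; uint64_t tag = 0; };
using Cache = std::array<std::array<Line, kWays>, kSets>;

// Returns the hitting way, or std::nullopt on a D-cache miss.
std::optional<int> lookup(const Cache& c, uint64_t paddr) {
    uint64_t set = (paddr / kLineBytes) % kSets;     // index bits select the set
    uint64_t tag = (paddr / kLineBytes) / kSets;     // remaining upper bits form the tag
    for (int way = 0; way < kWays; ++way)            // hardware compares all ways in parallel; done serially here
        if (c[set][way].valid && c[set][way].tag == tag)
            return way;
    return std::nullopt;
}

int main() {
    Cache c{};
    uint64_t addr = 0x7f1234c0;
    uint64_t set  = (addr / kLineBytes) % kSets;
    c[set][3].valid = true;                          // pretend a fill placed this line in way 3
    c[set][3].tag   = (addr / kLineBytes) / kSets;
    std::optional<int> way = lookup(c, addr);
    std::printf("hit=%d way=%d\n", (int)way.has_value(), way ? *way : -1);
}
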
Addressing modes

● An addressing mode is a mechanism for specifying an address (each mode is computed in the sketch after this list).
● absolute: the address is provided directly.
● register: the address is provided indirectly, by specifying where (in what register) the address can be found.
● displacement: the address is computed by adding a displacement to the contents of a register.
● indexed: the address is computed by adding a displacement to the contents of a register, and then adding in the contents of another register times some constant.

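A minimal C++ sketch (mine) of effective-address generation for the four modes above; the register values, displacement, and scale constant are illustrative.

#include <cstdint>
#include <cstdio>

uint64_t ea_absolute(uint64_t addr)              { return addr; }                  // address given directly
uint64_t ea_register(uint64_t ra)                { return ra; }                    // address held in RA
uint64_t ea_displacement(uint64_t ra, int64_t d) { return ra + d; }                // RA + displacement
uint64_t ea_indexed(uint64_t ra, int64_t d,
                    uint64_t rb, int scale)      { return ra + d + rb * scale; }   // RA + disp + RB * scale

int main() {
    uint64_t ra = 0x1000, rb = 3;
    std::printf("abs  = 0x%llx\n", (unsigned long long)ea_absolute(0x2000));
    std::printf("reg  = 0x%llx\n", (unsigned long long)ea_register(ra));
    std::printf("disp = 0x%llx\n", (unsigned long long)ea_displacement(ra, 16));
    std::printf("idx  = 0x%llx\n", (unsigned long long)ea_indexed(ra, 16, rb, 8));
}
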
Pipeline Diagrams

● Some pipeline diagrams to illustrate:
  – L1 Hit loads/stores
  – L1 Miss, L2 Hit
  – L1, L2 Miss
  – TLB Misses

Pipeline Arbitration

● Loads/stores from the Issue Unit
● Re-executing loads/stores that missed the DL1 or DTLB
● Line fills from the L2
● Snoops from other agents in an MP system
● Data prefetches

Sub Units

● Load/Store Engine
  – Load/Store execution pipeline
  – 2-3 such pipelines are present in modern designs
● L1 Data cache
  – Multi-banked for simultaneous access to the same line from multiple pipelines
  – Bank conflicts between loads/stores and snoops
  – Virtually/physically indexed
    ● Virtual indexing helps simultaneous access to the TLB, but aliases need handling
  – WB/WT
    ● WB saves bandwidth on writes to L2, but snoops need handling
  – Inclusive/Exclusive
  – Line size

Sub Units

● Data TLBs
  – Cache virtual-to-physical translations
  – A TLB miss will cause the load or store to stall
● Load Miss Queue (see the miss-queue sketch after this list)
  – Tracks line-fill requests to the L2
  – Holds loads/stores that miss the DL1, including ownership upgrades
  – Handles multiple load/store misses to the same cacheline
  – Restarts loads/stores as line fills arrive
    ● Critical-data forwarding to re-executing loads
    ● L2-hit restart for the best load-to-use latency in L2 hit cases
● Store Buffers
● Load/Store Re-order queue
● Data Prefetch
● Exceptions

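A minimal C++ sketch (mine) of a load-miss-queue style structure: misses to the same cacheline are merged into one L2 request, and the waiting loads/stores are replayed when the fill returns. The sizes, the printf replay hook, and the op-id scheme are illustrative assumptions.

#include <cstdint>
#include <cstdio>
#include <vector>

constexpr uint64_t kLineBytes = 64;

struct MissEntry {
    uint64_t line_addr;            // cacheline-aligned address requested from the L2
    std::vector<int> waiters;      // ids of loads/stores waiting on this fill
};

struct LoadMissQueue {
    std::vector<MissEntry> entries;

    // Record a DL1 miss; returns true if a new L2 request is needed.
    bool allocate(uint64_t addr, int op_id) {
        uint64_t line = addr & ~(kLineBytes - 1);
        for (auto& e : entries)
            if (e.line_addr == line) {       // merge with an outstanding miss to the same line
                e.waiters.push_back(op_id);
                return false;
            }
        entries.push_back({line, {op_id}});  // new outstanding miss: request the line from L2
        return true;
    }

    // Fill arrived from the L2: restart everything waiting on this line.
    void fill(uint64_t line) {
        for (size_t i = 0; i < entries.size(); ++i)
            if (entries[i].line_addr == line) {
                for (int id : entries[i].waiters)
                    std::printf("replay op %d\n", id);   // re-execute through the LSU pipe
                entries.erase(entries.begin() + i);
                return;
            }
    }
};

int main() {
    LoadMissQueue q;
    std::printf("request L2? %d\n", q.allocate(0x1000, 1)); // miss: new request
    std::printf("request L2? %d\n", q.allocate(0x1008, 2)); // same line: merged, no new request
    q.fill(0x1000);
}
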
Alignments

● Aligned
  – Aligned on an operand-sized boundary
● Unaligned
  – Access crosses an operand-sized boundary
  – Might get broken down into multiple accesses
● Line Crossing
  – Access crosses cachelines (see the crossing check sketched after this list)
  – Broken down into 2 accesses, and the data gets merged together
  – Not guaranteed to be atomic (both x86 and Power)
● Page Crossing
  – Access crosses page boundaries
  – Broken down into 2 accesses, with 2 TLB lookups / page-miss handlings

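A minimal C++ sketch (mine) of how an LSU might classify an access as aligned, line-crossing, or page-crossing. The 64 B line and 4 KiB page sizes are typical values, used here as assumptions.

#include <cstdint>
#include <cstdio>

constexpr uint64_t kLineBytes = 64;
constexpr uint64_t kPageBytes = 4096;

bool is_aligned(uint64_t ea, uint64_t size) { return (ea % size) == 0; }

// True if the byte range [ea, ea+size) straddles a boundary of the given size.
bool crosses(uint64_t ea, uint64_t size, uint64_t boundary) {
    return (ea / boundary) != ((ea + size - 1) / boundary);
}

int main() {
    struct { uint64_t ea, size; } acc[] = {
        {0x1000, 8},   // aligned double word
        {0x103e, 4},   // unaligned, crosses the 64 B line at 0x1040
        {0x1ffe, 4},   // crosses the 4 KiB page at 0x2000: two accesses, two translations
    };
    for (auto a : acc)
        std::printf("ea=0x%llx size=%llu aligned=%d line_cross=%d page_cross=%d\n",
                    (unsigned long long)a.ea, (unsigned long long)a.size,
                    is_aligned(a.ea, a.size),
                    crosses(a.ea, a.size, kLineBytes),
                    crosses(a.ea, a.size, kPageBytes));
}
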
Unaligned Access

● How are unaligned accesses handled?

Memory Data Dependences

Question 2

● Why is it hard to handle memory dependences?

Memory Data Dependencies

● Memory dependency detection:
  – Must compute the effective addresses of both memory references
  – Effective addresses can depend on run-time data and other instructions
  – Comparison of addresses requires much wider comparators (see the overlap check sketched after this list)
● Why memory dependences are hard to handle:
  – Memory addresses are much wider than register names (64 bits vs. 5 bits)
  – Memory dependences are not static
    ● A load (or store) instruction's address can change (e.g. across loop iterations)
  – Addresses need to be calculated and translated first
  – Memory instructions take longer to execute relative to other instructions
    ● Cache misses can take 100s of cycles
    ● TLB misses can take 100s of cycles

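A minimal C++ sketch (mine) of the dependence check between a load and an older store: with byte addresses and access sizes, the two references alias when their byte ranges overlap. This is the "full address comparison" mentioned above; real designs often compare a subset of bits first and resolve the rest later.

#include <cstdint>
#include <cstdio>

struct MemRef { uint64_t ea; uint64_t size; };   // effective address + bytes accessed

bool overlaps(const MemRef& a, const MemRef& b) {
    return a.ea < b.ea + b.size && b.ea < a.ea + a.size;
}

int main() {
    MemRef store{0x1000, 8};
    MemRef load1{0x1004, 4};   // overlaps the store: a true dependence through memory
    MemRef load2{0x1010, 4};   // disjoint: independent
    std::printf("%d %d\n", overlaps(store, load1), overlaps(store, load2));
}
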
Simple In-order Load/Store Processing: Total Load-Store Order

● Keep all loads and stores totally in order
● However, loads and stores can execute out of order with respect to other types of instructions while obeying register data dependences
● Question: So when can a store actually write to the cache?
  – What if we write to the cache as the store executes?

Store Buffers

● Stores (modeled in the sketch after this list)
  – Allocate a store buffer entry at DISPATCH (in order)
  – When the register value is available, issue and calculate the address ("finished")
  – When all previous instructions retire, the store is considered "completed"
    ● The store buffer is split into "finished" and "completed" parts through pointers
  – Completed stores go to memory/cache in order
● Loads
  – Loads remember the store buffer entry of the last store before them
  – A load can issue when its address register value is available and
    ● all older stores are considered "completed"
● Q1: What happens to the store buffer when, say, a branch mispredicts?
● Q2: What happens when a snoop hits a store buffer entry?

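A minimal C++ sketch (mine) of a store buffer with "finished" vs. "completed" state: entries are allocated in order at dispatch, marked finished when the address and data are ready, marked completed at retirement, and drained to the cache in order from the completed end. Field names, sizes, and the flush policy for Q1 are illustrative assumptions.

#include <cstdint>
#include <cstdio>
#include <deque>

struct StoreEntry {
    uint64_t ea = 0;
    uint64_t data = 0;
    bool finished = false;    // address and data are known
    bool completed = false;   // all older instructions have retired
};

struct StoreBuffer {
    std::deque<StoreEntry> q;                 // oldest entry at the front

    int dispatch() { q.push_back({}); return (int)q.size() - 1; }
    void finish(int idx, uint64_t ea, uint64_t data) {
        q[idx].ea = ea; q[idx].data = data; q[idx].finished = true;
    }
    void complete(int idx) { q[idx].completed = true; }

    // Drain completed stores, oldest first, to the D-cache/memory.
    void drain() {
        while (!q.empty() && q.front().completed) {
            std::printf("write 0x%llx -> [0x%llx]\n",
                        (unsigned long long)q.front().data,
                        (unsigned long long)q.front().ea);
            q.pop_front();
        }
    }

    // Q1 above: on a branch mispredict, speculative (not yet completed)
    // entries younger than the branch are simply discarded.
    void flush_speculative() {
        while (!q.empty() && !q.back().completed) q.pop_back();
    }
};

int main() {
    StoreBuffer sb;
    int s0 = sb.dispatch(), s1 = sb.dispatch();
    sb.finish(s0, 0x1000, 0xAA);
    sb.finish(s1, 0x1008, 0xBB);
    sb.complete(s0);            // s0 retires; s1 is still speculative
    sb.drain();                 // only s0 is written to the cache
    sb.flush_speculative();     // e.g. a mispredict squashes s1
}
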
Load Bypassing & Forwarding

● Load Bypassing
● Load Forwarding

Load Bypassing & Forwarding

● Bypassing
  – Loads can be allowed to bypass (execute ahead of) older stores if there is no aliasing.
  – Store addresses still need to be computed before loads can be issued, to allow checking for load dependences.
  – If the dependence cannot be checked, e.g. a store address cannot be determined, then all subsequent loads are held until the address is valid (conservative).
● Forwarding
  – If a subsequent load depends on a store still in the store buffer, it need not wait until the store is issued to the data cache.
  – The load can be satisfied directly from the store buffer if the address is valid and the data is available in the store buffer.

Load Forwarding

● Q: In case of multiple matches, which store do we forward from?
● Q: In case of a partial match, can we forward? (A forwarding sketch follows.)

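A minimal C++ sketch (mine) of a store-to-load forwarding check against the store buffer. It assumes one common policy, not necessarily the one intended by the slides: scan from the youngest store older than the load; forward only when that store's bytes fully contain the load's bytes; a partial overlap, or an older store with an unknown address, makes the load wait.

#include <cstdint>
#include <cstdio>
#include <vector>

struct BufferedStore { uint64_t ea; uint64_t size; uint64_t data; bool addr_valid; };

enum class FwdResult { NoMatch, Forward, Stall };

FwdResult check_forward(const std::vector<BufferedStore>& older_stores, // oldest first
                        uint64_t ld_ea, uint64_t ld_size, uint64_t* fwd_data) {
    // Walk from the youngest older store toward the oldest; the first overlap decides.
    for (auto it = older_stores.rbegin(); it != older_stores.rend(); ++it) {
        if (!it->addr_valid) return FwdResult::Stall;     // unknown address: be conservative
        bool overlap  = it->ea < ld_ea + ld_size && ld_ea < it->ea + it->size;
        if (!overlap) continue;
        bool contains = it->ea <= ld_ea && ld_ea + ld_size <= it->ea + it->size;
        if (contains) {
            *fwd_data = it->data;                         // a sub-range load would also shift/mask; omitted
            return FwdResult::Forward;
        }
        return FwdResult::Stall;                          // partial match: wait for the store to write the cache
    }
    return FwdResult::NoMatch;                            // bypass: read the D-cache
}

int main() {
    std::vector<BufferedStore> sb = {
        {0x1000, 8, 0x1111, true},
        {0x1000, 8, 0x2222, true},   // the youngest matching older store wins
    };
    uint64_t data = 0;
    FwdResult r = check_forward(sb, 0x1000, 8, &data);
    std::printf("result=%d data=0x%llx\n", (int)r, (unsigned long long)data);
}
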
Non-Speculative Disambiguation

● Non-speculative load/store disambiguation:
  – Loads wait for the addresses of all prior stores
  – Full address comparison
  – Bypass if no match, forward if match
● Can limit performance:
  – load r5, MEM[r3]      (cache miss)
  – store r7, MEM[r5]     (RAW dependence on r5 for agen, stalled)
  – ...
  – load r8, MEM[r9]      (independent load, but stalled)

Speculative Disambiguation

• What if aliases are rare?
  1. Loads don't wait for the addresses of all prior stores
  2. Full address comparison against the stores that are ready
  3. Bypass if no match, forward if match
  4. Check all store addresses when they commit
     – No matching loads: the speculation was correct
     – Matching unbypassed load: incorrect speculation
  5. Replay starting from the incorrect load

Speculative Disambiguation: Safe Speculation

• i1 and i2 issue out of program order
• i1 checks the load queue at commit (no match)

Speculative Disambiguation: Violation

• i1 and i2 issue out of program order
• i1 checks the load queue at commit (match)
  – i2 is marked for replay

Load/Store Re-order queues

Memory Dependence Prediction

● If aliases are rare: static prediction
  – Predict "no alias" every time (blind prediction)
  – Pay the misprediction penalty rarely
● If aliases are more frequent: dynamic prediction
  – Use some form of history table for loads
  – Store Set algorithm (sketched after this list)
    ● Allow speculation of loads around stores when the program starts
    ● If a load and a store cause a violation, add the PC of the store to the load's store set
    ● The next time the load executes, it waits for all stores in its store set

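A minimal C++ sketch (mine) of the store-set idea described above: speculate freely at first; on a violation, remember the offending store PC in the load's store set; afterwards the load waits for any in-flight store whose PC is in that set. Real implementations (e.g. SSIT/LFST tables) are more involved; this only captures the learning rule.

#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <unordered_set>

struct StoreSetPredictor {
    std::unordered_map<uint64_t, std::unordered_set<uint64_t>> store_set; // load PC -> store PCs

    // Called when a committing store is found to alias a younger load that
    // already executed (a memory-ordering violation).
    void on_violation(uint64_t load_pc, uint64_t store_pc) {
        store_set[load_pc].insert(store_pc);
    }

    // Called when the load is about to issue: must it wait for this in-flight store?
    bool must_wait(uint64_t load_pc, uint64_t inflight_store_pc) const {
        auto it = store_set.find(load_pc);
        return it != store_set.end() && it->second.count(inflight_store_pc) != 0;
    }
};

int main() {
    StoreSetPredictor p;
    uint64_t load_pc = 0x400100, store_pc = 0x4000f0;
    std::printf("%d\n", p.must_wait(load_pc, store_pc)); // 0: speculate around the store
    p.on_violation(load_pc, store_pc);                   // a violation is caught once
    std::printf("%d\n", p.must_wait(load_pc, store_pc)); // 1: from now on the load waits
}
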
Prediction Implementation (Intel Core 2)

• A history table indexed by the load's Instruction Pointer (see the counter sketch below)
• Each entry in the history table has a saturating counter
• Once the counter saturates, disambiguation is enabled for this load (taking effect from the next iteration): the load is allowed to go ahead even when it meets unknown store addresses
• When a particular load fails disambiguation, its counter is reset
• Each time a particular load disambiguates correctly, its counter is incremented

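A minimal C++ sketch of my reading of the description above (not Intel's actual design): a per-load saturating counter that must saturate before the load may issue past unknown store addresses, is reset on a failed disambiguation, and is incremented on each correct one. The table size and counter width are assumptions.

#include <array>
#include <cstdint>
#include <cstdio>

constexpr int kEntries = 256;
constexpr int kMax     = 15;   // 4-bit saturating counter (assumed width)

struct Disambiguator {
    std::array<uint8_t, kEntries> ctr{};           // indexed by hashed load IP

    int index(uint64_t ip) const { return (int)(ip % kEntries); }

    // May this load issue even though some older store addresses are unknown?
    bool allow_speculation(uint64_t ip) const { return ctr[index(ip)] == kMax; }

    void on_correct(uint64_t ip)   { if (ctr[index(ip)] < kMax) ++ctr[index(ip)]; }
    void on_violation(uint64_t ip) { ctr[index(ip)] = 0; }   // reset on a failed disambiguation
};

int main() {
    Disambiguator d;
    uint64_t ip = 0x400200;
    for (int i = 0; i < kMax; ++i) d.on_correct(ip);         // confidence builds up
    std::printf("speculate? %d\n", d.allow_speculation(ip)); // 1
    d.on_violation(ip);
    std::printf("speculate? %d\n", d.allow_speculation(ip)); // 0
}
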
Data Prefetching

● S/W prefetching
  – Instructions like prefetch (x86)
  – Cache touch instructions (Power)
● H/W prefetching
  – Speculates about future memory access patterns based on previous patterns
  – Hardware monitors the processor's address reference pattern and issues a prefetch if a predictable memory address pattern is detected (a stride-detector sketch follows this list)

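A minimal C++ sketch (mine) of a simple stride-based hardware prefetcher of the kind described above: track the last address and stride per load PC, and once the same stride repeats, issue a prefetch for the next expected address. The table organization and confidence threshold are assumptions.

#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct StrideEntry { uint64_t last_addr = 0; int64_t stride = 0; int confidence = 0; };

struct StridePrefetcher {
    std::unordered_map<uint64_t, StrideEntry> table;    // keyed by load PC

    // Called on every demand access; returns true and a prefetch address when confident.
    bool on_access(uint64_t pc, uint64_t addr, uint64_t* prefetch_addr) {
        StrideEntry& e = table[pc];
        int64_t stride = (int64_t)(addr - e.last_addr);
        if (e.last_addr != 0 && stride == e.stride && stride != 0)
            e.confidence++;                              // the same stride repeated
        else
            e.confidence = 0;
        e.stride = stride;
        e.last_addr = addr;
        if (e.confidence >= 2) {
            *prefetch_addr = addr + stride;              // fetch the next expected address early
            return true;
        }
        return false;
    }
};

int main() {
    StridePrefetcher p;
    uint64_t pf = 0;
    for (uint64_t a = 0x1000; a <= 0x1200; a += 0x40)    // a strided load stream
        if (p.on_access(0x400300, a, &pf))
            std::printf("prefetch 0x%llx\n", (unsigned long long)pf);
}
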
Exceptions

● Alignment exceptions
● Page faults
● Cache parity errors

