The document summarizes research on cache memory and performance issues. It discusses that the purpose of cache memory is to quickly store and access instructions and data that the CPU frequently uses, rather than accessing the slower main memory. The summary then outlines six basic cache optimizations: using a larger block size, larger cache size, and higher associativity can reduce miss rates, while multilevel caches, prioritizing reads over writes, and avoiding address translation during indexing can reduce miss penalties and hit times.
Research paper:
Cache memory and analysis on performance issues
09-5-019
Group leader: Sheryar Akif
Saeed Ullah, ID 9124, Section: A
Purpose of cache memory:
The purpose of cache memory is to store program instructions and data that are used repeatedly during program execution, or information that the CPU is likely to need next. The processor can access this information quickly from the cache rather than having to fetch it from the computer's slower main memory.
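As a rough illustration of this idea, a direct-mapped cache can be modelled in a few lines of Python. The 16-byte block size and 8-set geometry here are arbitrary choices for the example, not a real cache configuration:

```python
# Toy model of a direct-mapped cache: each byte address is split into
# tag | index | offset, and a whole block is filled on the first miss.
BLOCK_SIZE = 16   # bytes per block (4 offset bits)
NUM_SETS = 8      # sets in the cache (3 index bits)

cache = {}  # index -> tag of the block currently stored there

def access(addr):
    """Return 'hit' or 'miss' for a byte address."""
    block = addr // BLOCK_SIZE
    index = block % NUM_SETS
    tag = block // NUM_SETS
    if cache.get(index) == tag:
        return "hit"
    cache[index] = tag   # fill the block on a miss
    return "miss"

# Repeated use of nearby data is served from the cache:
print(access(0x100))  # miss: first touch brings the block in
print(access(0x104))  # hit: same 16-byte block
print(access(0x100))  # hit: reused data
```

The second and third accesses hit because they fall in a block already fetched, which is exactly the reuse the paragraph above describes.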
Summary: there are six basic cache optimizations.
Reducing the Miss Rate
1. Larger block size (reduces compulsory misses)
2. Larger cache size (reduces capacity misses)
3. Higher associativity (reduces conflict misses)
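A quick sketch of why higher associativity helps with conflict misses: two blocks that map to the same set evict each other in a direct-mapped cache but can coexist in a 2-way set-associative one. The geometry and address trace below are invented for the example:

```python
# Compare conflict behaviour for two blocks that share a set.
NUM_SETS = 4
BLOCK_SIZE = 16

def simulate(addresses, ways):
    """Count misses for an address trace in a set-associative
    cache with LRU replacement."""
    sets = {i: [] for i in range(NUM_SETS)}  # index -> LRU-ordered tags
    misses = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        index = block % NUM_SETS
        tag = block // NUM_SETS
        lru = sets[index]
        if tag in lru:
            lru.remove(tag)       # hit: refresh LRU position
        else:
            misses += 1
            if len(lru) == ways:  # set full: evict least recently used
                lru.pop(0)
        lru.append(tag)
    return misses

# Two blocks that both map to set 0, accessed alternately:
trace = [0x000, 0x040, 0x000, 0x040, 0x000, 0x040]
print(simulate(trace, ways=1))  # 6: every access is a conflict miss
print(simulate(trace, ways=2))  # 2: only the two compulsory misses
```

With one way the blocks ping-pong and every access misses; with two ways both blocks stay resident after their first (compulsory) miss.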
Reducing the Miss Penalty
4. Multilevel Caches
5. Giving Reads Priority over Writes
• E.g., let reads complete before earlier writes still waiting in the write buffer
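The payoff of optimization 4 shows up in the standard average memory access time (AMAT) formula, where the L1 miss penalty is itself the AMAT of the L2 cache. The latencies and miss rates below are made-up but plausible numbers, used only to illustrate the arithmetic:

```python
# AMAT = hit_time + miss_rate * miss_penalty, applied recursively:
# with an L2 cache, the L1 miss penalty becomes the L2 access time
# rather than a full trip to main memory.
l1_hit, l1_miss_rate = 1, 0.05   # cycles; fraction of L1 accesses
l2_hit, l2_miss_rate = 10, 0.20  # cycles; L2 local miss rate
mem_latency = 100                # cycles to main memory

amat_no_l2 = l1_hit + l1_miss_rate * mem_latency
amat_with_l2 = l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_latency)

print(amat_no_l2)    # 6.0 cycles
print(amat_with_l2)  # 2.5 cycles
```

Even though the L2 is much slower than the L1, catching 80% of L1 misses cuts the average access time by more than half in this made-up configuration.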
Reducing the time to hit in the cache
6. Avoiding Address Translation during Cache Indexing
• E.g., overlap the TLB lookup with the cache access, or use virtually addressed caches
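One common way to realize optimization 6 is a virtually indexed, physically tagged cache. A sketch of the underlying arithmetic, assuming 4 KiB pages and an example 32 KiB, 8-way cache with 64-byte blocks: if the cache's index and offset bits all fall within the page offset, the set index is identical in the virtual and physical address, so set selection can begin in parallel with the TLB lookup.

```python
# With 4 KiB pages, the low 12 bits of an address are untranslated.
PAGE_OFFSET_BITS = 12

# Example cache geometry: 32 KiB, 8-way set associative, 64-byte blocks.
cache_size, ways, block = 32 * 1024, 8, 64
sets = cache_size // (ways * block)  # 64 sets -> 6 index bits

index_bits = sets.bit_length() - 1    # log2(sets)  = 6
offset_bits = block.bit_length() - 1  # log2(block) = 6

# If index + offset fit inside the page offset, the virtual and
# physical set index are guaranteed to match, so the cache can be
# indexed before address translation finishes.
print(index_bits + offset_bits)                        # 12
print(index_bits + offset_bits <= PAGE_OFFSET_BITS)    # True
```

This is also why raising associativity (optimization 3) interacts with optimization 6: more ways means fewer sets for the same capacity, keeping the index within the untranslated page-offset bits.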