Cache Memory in Computer Organization
Cache memory is a special, very high-speed memory used to speed up the CPU and stay synchronized with it. It is costlier than main memory or disk memory, but more economical than CPU registers. Cache memory acts as a buffer between RAM and the CPU: it holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from main memory. The cache is a smaller, faster memory that stores copies of data from frequently used main-memory locations. A CPU contains several independent caches, which store instructions and data.
Levels of memory:
Level 1 or Register –
The memory in which data is stored and accepted immediately by the CPU. The most commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache memory –
The fastest memory, with the shortest access time, where data is temporarily stored for faster access.
Level 3 or Main memory –
The memory on which the computer currently works. It is small in size, and once power is off the data no longer stays in this memory.
Level 4 or Secondary memory –
External memory that is not as fast as main memory, but where data stays permanently.
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.
If the processor does not find the memory location in the cache, a cache miss has occurred. On a miss, the cache allocates a new entry and copies the data in from main memory; the request is then fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called the hit ratio:
Hit ratio = number of hits / (number of hits + number of misses)
Cache performance can be improved by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
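As an illustration (a minimal sketch, not part of the original article), the hit ratio and a commonly used simplified model of average access time for a single cache level can be computed as:

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Fraction of memory accesses served from the cache."""
    total = hits + misses
    return hits / total if total else 0.0

def avg_access_time(h: float, t_cache: float, t_main: float) -> float:
    # Simplified single-level model: hits cost t_cache,
    # misses pay the main-memory penalty t_main.
    return h * t_cache + (1 - h) * t_main

print(hit_ratio(90, 10))                       # 0.9
print(round(avg_access_time(0.9, 1, 100), 2))  # 10.9
```

With a 90% hit ratio, a 1-cycle cache, and a 100-cycle main memory, the average access time is about 10.9 cycles, which shows why even a small improvement in hit ratio matters.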
Cache Mapping:
There are three different types of mapping used for cache memory: direct mapping, associative mapping, and set-associative mapping. These are explained below.
Direct Mapping –
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. In direct mapping, each memory block is assigned to a specific line in the cache. If that line is already occupied when a new block needs to be loaded, the old block is replaced. The address is split into an index field and a tag field; the cache stores the tag, while the index selects the line. Direct mapping's performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory. In most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s-r bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache.
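This field split can be sketched in a few lines of Python; the word and line widths below are hypothetical, chosen only for illustration:

```python
def split_direct_mapped(addr: int, word_bits: int, line_bits: int):
    """Split a main-memory address into (tag, line, word) fields
    for a direct-mapped cache; field widths are hypothetical."""
    word = addr & ((1 << word_bits) - 1)                  # least significant w bits
    line = (addr >> word_bits) & ((1 << line_bits) - 1)   # r line bits
    tag = addr >> (word_bits + line_bits)                 # remaining s-r tag bits
    return tag, line, word

# 16-bit address, w = 2 word bits, r = 6 line bits (m = 2^6 = 64 lines)
print(split_direct_mapped(0xABCD, 2, 6))  # (171, 51, 1)

# The line field realizes i = j modulo m: memory block j = 77
# maps to line 77 % 64 = 13 in a 64-line cache.
```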
Associative Mapping –
In this type of mapping, associative memory is used to store both the content and the address of each memory word. Any block can go into any line of the cache. The word-id bits identify which word in the block is needed, and the tag is all of the remaining bits. This enables the placement of any block at any place in the cache memory, making it the fastest and most flexible mapping form.
Set-associative Mapping –
This form of mapping is an enhanced form of direct mapping that removes its drawbacks. Set-associative mapping addresses the problem of possible thrashing in the direct mapping method: instead of having exactly one line that a block can map to in the cache, a few lines are grouped together to create a set, and a block in memory can map to any one of the lines of a specific set. Set-associative mapping thus allows two or more blocks that share the same index address to be present in the cache at the same time. It combines the best of the direct and associative cache mapping techniques.
In this case, the cache consists of a number of sets, each of which consists of a number of lines. The relationships are
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
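The set-index calculation above can be sketched as follows (the cache dimensions are chosen only for illustration):

```python
def set_index(j: int, v: int) -> int:
    """i = j mod v: main-memory block j maps to set i."""
    return j % v

# A 2-way set-associative cache with m = 8 lines has v = m // k = 4 sets.
m, k = 8, 2
v = m // k
# Blocks 5 and 13 both map to set 1, but can coexist in its two lines,
# which avoids the thrashing a direct-mapped cache would suffer.
print(set_index(5, v), set_index(13, v))  # 1 1
```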
Usually, the cache memory can store a reasonable number of blocks at any given time, but this number
is small compared to the total number of blocks in the main memory.
The correspondence between the main memory blocks and those in the cache is specified by a mapping
function.
Types of Cache –
Primary Cache –
A primary cache is always located on the processor chip. This cache is small and its access time is
comparable to that of processor registers.
Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the memory. It is referred to as
the level 2 (L2) cache. Often, the Level 2 cache is also housed on the processor chip.
Locality of reference –
Since the size of cache memory is small compared to main memory, which part of main memory should be given priority and loaded into the cache is decided based on locality of reference.
Temporal locality says that a recently referenced item is likely to be referenced again soon; spatial locality says that items close to a recently referenced item are likely to be referenced next.
Because of spatial locality, a cache miss does not load only the requested word: the complete block containing that word is brought into the cache, since neighbouring words are likely to be referenced shortly afterwards. When the cache is full and a block must be evicted to make room, a replacement policy such as Least Recently Used (LRU) is commonly applied.
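The LRU replacement policy can be sketched as a minimal simulation (a sketch only; block numbers are arbitrary):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache sketch (capacity in blocks)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block: int) -> str:
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block] = True
        return "miss"

c = LRUCache(2)
print([c.access(b) for b in [1, 2, 1, 3, 2]])
# ['miss', 'miss', 'hit', 'miss', 'miss']
```

Note how accessing block 1 again before block 3 arrives keeps it in the cache, so block 2 is the one evicted.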
Que-1 (GATE CS 2012, Question 54):
(A) 11
(B) 14
(C) 16
(D) 27
Answer: (C)
Explanation: https://www.geeksforgeeks.org/gate-gate-cs-2012-question-54/
Que-2: Consider the data given in previous question. The size of the cache tag directory is
(C) 40 Kbits
(D) 32 bits
Answer: (A)
Explanation: https://www.geeksforgeeks.org/gate-gate-cs-2012-question-55/
Que-3: An 8KB direct-mapped write-back cache is organized as multiple blocks, each of size 32 bytes. The processor generates 32-bit addresses. The cache controller maintains the tag information for each cache block, comprising the following:
1 valid bit
1 modified bit
As many bits as the minimum needed to identify the memory block mapped in the cache. What is the total size of the memory needed at the cache controller to store meta-data (tags) for the cache?
(A) 4864 bits
Answer: (D)
Explanation: https://www.geeksforgeeks.org/gate-gate-cs-2011-question-43/
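The standard working for Que-3 can be checked with a short calculation (a sketch of the arithmetic, not the linked explanation):

```python
cache_size = 8 * 1024   # 8 KB cache
block_size = 32         # bytes per block
addr_bits = 32          # processor address width

num_blocks = cache_size // block_size         # 256 cache lines
offset_bits = block_size.bit_length() - 1     # 5 bits select a byte in a block
index_bits = num_blocks.bit_length() - 1      # 8 bits select a cache line
tag_bits = addr_bits - offset_bits - index_bits   # 19 tag bits

bits_per_line = tag_bits + 1 + 1              # tag + valid bit + modified bit = 21
total_bits = num_blocks * bits_per_line
print(total_bits)  # 5376
```

The working gives 19 tag bits and 21 metadata bits per line, for a total of 5376 bits across the 256 lines, consistent with answer (D).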
Article Contributed by Pooja Taneja and Vaishali Bhatia. Please write comments if you find anything
incorrect, or you want to share more information about the topic discussed above.