
CACHE MEMORY

Maninder Kaur
[email protected]

What is Cache Memory?
 Cache memory is a small, high-speed RAM buffer located between the CPU and main memory.

 Cache memory holds a copy of the instructions (instruction cache) or data (operand or data cache) currently being used by the CPU.

 The main purpose of a cache is to accelerate the computer while keeping its price low.

Placement of Cache in a Computer

(Diagram: the cache sits between the CPU and main memory.)

Hit Ratio
 The ratio of the total number of hits to the total number of CPU accesses to memory (i.e. hits plus misses) is called the hit ratio.

 Hit Ratio = Total Number of Hits / (Total Number of Hits + Total Number of Misses)
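
For illustration, a minimal sketch in C of computing the hit ratio from hit and miss counters (the sample counts below are assumptions, not from the slides):

/* Minimal sketch: computing the hit ratio from hit and miss counters.
   The sample counts are illustrative, not from the slides. */
#include <stdio.h>

int main(void)
{
    unsigned long hits = 950, misses = 50;                  /* assumed sample counts */
    double hit_ratio = (double)hits / (double)(hits + misses);
    printf("Hit ratio = %.2f\n", hit_ratio);                /* prints 0.95 */
    return 0;
}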

Example
Consider a system with a 512 x 12 cache and a 32 K x 12 main memory (512 cache words and 32 K main-memory words, each 12 bits wide).
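
For this configuration the main-memory address is 15 bits (32 K = 2^15 words) and the cache index is 9 bits (512 = 2^9 words), leaving a 6-bit tag. A minimal C sketch of that address split (the example address value and names are illustrative assumptions):

/* Sketch: splitting the 15-bit address for the 512-word direct-mapped cache above.
   32K words of main memory -> 15 address bits; 512 cache words -> 9 index bits;
   tag = 15 - 9 = 6 bits. The example address value is arbitrary. */
#include <stdio.h>

#define INDEX_BITS 9
#define INDEX_MASK ((1u << INDEX_BITS) - 1)   /* low 9 bits select the cache word */

int main(void)
{
    unsigned addr  = 0x1ABC;                  /* any 15-bit main-memory address */
    unsigned index = addr & INDEX_MASK;       /* one of the 512 cache locations */
    unsigned tag   = addr >> INDEX_BITS;      /* remaining 6 bits stored as the tag */
    printf("addr=%04X index=%03X tag=%02X\n", addr, index, tag);
    return 0;
}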

Types of Cache Mapping

1. Direct Mapping

2. Associative Mapping

3. Set Associative Mapping

1. Direct Mapping
 The direct mapping technique is simple and inexpensive to implement.

 When the CPU wants to access data from memory, it places an address. The index field of the CPU address is used to access the corresponding word in the cache.

 The tag field of the CPU address is compared with the tag stored in the word read from the cache.

 If the tag bits of the CPU address match the tag bits in the cache, there is a hit and the required data word is read from the cache.

 If there is no match, there is a miss and the required data word is read from main memory. It is then transferred from main memory to the cache together with the new tag.
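
A minimal C sketch of the lookup and fill steps described above, using the 512-word cache from the earlier example (the struct layout, names and 16-bit word type are assumptions, not from the slides):

/* Minimal sketch of a direct-mapped cache lookup and fill.
   The struct layout, array names and 16-bit word type are illustrative assumptions
   (the slides' example uses 12-bit words). */
#include <stdbool.h>
#include <stdint.h>

#define CACHE_WORDS 512
#define INDEX_BITS  9

struct cache_line {
    bool     valid;
    uint16_t tag;
    uint16_t data;
};

static struct cache_line cache[CACHE_WORDS];
static uint16_t main_memory[32 * 1024];        /* assumed 32K-word backing store */

uint16_t read_word(uint16_t addr)              /* addr is a 15-bit main-memory address */
{
    uint16_t index = addr & (CACHE_WORDS - 1); /* index field selects the cache word */
    uint16_t tag   = addr >> INDEX_BITS;       /* tag field is the remaining bits */

    if (cache[index].valid && cache[index].tag == tag)
        return cache[index].data;              /* hit: read the word from the cache */

    /* miss: read the word from main memory, then place it in the cache with the new tag */
    cache[index].valid = true;
    cache[index].tag   = tag;
    cache[index].data  = main_memory[addr];
    return cache[index].data;
}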

2. Associative Mapping
 Associative mapping uses an associative memory.

 This memory is accessed by its contents rather than by an address.

 Each line of the cache accommodates a main-memory address together with the contents of that address from main memory.

 That is why this memory is also called Content Addressable Memory (CAM). It allows any block of main memory to be stored in any line of the cache.
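
A minimal C sketch of a fully associative lookup, where each line stores the full address together with its data (the names, the LINES value and the linear search, which hardware performs in parallel, are illustrative assumptions):

/* Minimal sketch of a fully associative (CAM-style) lookup: every line stores the full
   main-memory address with its data, and a lookup compares the address against all lines. */
#include <stdbool.h>
#include <stdint.h>

#define LINES 512

struct cam_line {
    bool     valid;
    uint16_t address;   /* full main-memory address kept alongside the data */
    uint16_t data;
};

static struct cam_line cam[LINES];

bool cam_lookup(uint16_t addr, uint16_t *out)
{
    for (int i = 0; i < LINES; i++) {          /* hardware compares all lines in parallel */
        if (cam[i].valid && cam[i].address == addr) {
            *out = cam[i].data;
            return true;                       /* hit */
        }
    }
    return false;                              /* miss: the block may go into any free line */
}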

3. Set Associative Mapping
 Set associative mapping combines the easy control of the direct-mapped cache with the more flexible mapping of the fully associative cache.

 In set associative mapping, each cache location can hold more than one tag + data pair.

 That is, more than one pair of tag and data can reside at the same cache location. If each cache location holds two tag + data pairs, the organization is called 2-way set associative mapping.
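
A minimal C sketch of a 2-way set associative lookup (the set count, struct layout and names are illustrative assumptions):

/* Minimal sketch of a 2-way set associative lookup: each set holds two tag + data pairs,
   and the tag of the address is compared with both. */
#include <stdbool.h>
#include <stdint.h>

#define WAYS           2
#define SET_COUNT      256
#define SET_INDEX_BITS 8

struct way { bool valid; uint16_t tag; uint16_t data; };
static struct way sets[SET_COUNT][WAYS];

bool set_assoc_lookup(uint16_t addr, uint16_t *out)
{
    uint16_t index = addr & (SET_COUNT - 1);   /* set index field of the address */
    uint16_t tag   = addr >> SET_INDEX_BITS;   /* remaining bits form the tag */

    for (int w = 0; w < WAYS; w++) {           /* compare the tag with both pairs in the set */
        if (sets[index][w].valid && sets[index][w].tag == tag) {
            *out = sets[index][w].data;
            return true;                       /* hit in one of the two ways */
        }
    }
    return false;                              /* miss: a replacement algorithm picks a way */
}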

3. Two-Way Set Associative Mapping

(Diagram of a two-way set associative cache.)

Replacement Algorithms of Cache Memory
 Replacement algorithms are used when there is no available space in the cache in which to place new data. Four of the most common cache replacement algorithms are described below (an LRU sketch follows the list):

 Least Recently Used (LRU):
 The LRU algorithm selects for replacement the item that has been least recently used by the CPU.

 First-In-First-Out (FIFO):
 The FIFO algorithm selects for replacement the item that has been in the cache for the longest time.

 Least Frequently Used (LFU):
 The LFU algorithm selects for replacement the item that has been least frequently used by the CPU.

 Random:
 The random algorithm selects the item for replacement at random.
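
A minimal C sketch of LRU replacement for one set, using a per-way timestamp to find the least recently used victim (all names and the timestamp approach are illustrative assumptions):

/* Minimal sketch of LRU replacement for one set: a timestamp per way records when it was
   last used, and the way with the oldest timestamp is chosen as the victim. */
#include <stdint.h>

#define WAYS 4

struct way { uint16_t tag; uint16_t data; uint32_t last_used; };
static struct way set[WAYS];
static uint32_t now;                            /* incremented on every cache access */

void touch(int w) { set[w].last_used = ++now; } /* call on every hit to way w */

int lru_victim(void)
{
    int victim = 0;
    for (int w = 1; w < WAYS; w++)              /* pick the way used least recently */
        if (set[w].last_used < set[victim].last_used)
            victim = w;
    return victim;
}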

Writing into Cache
 When a memory write operation is performed, the CPU first writes into the cache memory. The modifications made by the CPU during a write operation, on the data held in the cache, eventually need to be written back to main memory or to auxiliary memory.

 The two popular cache write policies (schemes) are:

Write-Through
Write-Back

Write-Through
 In a write-through cache, the main memory is updated each time the CPU writes into the cache.

 The advantage of the write-through policy is that main memory always contains the same data as the cache.

 This characteristic is desirable in a system that uses a direct memory access (DMA) scheme of data transfer, since the I/O devices communicating through DMA receive the most recent data.
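
A minimal C sketch of a write-through store, reusing the cache[], main_memory[], CACHE_WORDS and INDEX_BITS declarations from the direct-mapped sketch above (all illustrative assumptions):

/* Minimal sketch of a write-through store: each CPU write updates both the cache word
   and the corresponding main-memory word. Reuses the direct-mapped declarations above. */
void write_through(uint16_t addr, uint16_t value)
{
    uint16_t index = addr & (CACHE_WORDS - 1);
    cache[index].valid = true;
    cache[index].tag   = addr >> INDEX_BITS;
    cache[index].data  = value;                /* update the cache ... */
    main_memory[addr]  = value;                /* ... and main memory on every CPU write */
}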

Write-Back
 In a write-back scheme, only the cache memory is updated during a write operation.

 The updated locations in the cache memory are marked by a flag so that later on, when the word is removed from the cache, it is copied into the main memory.

 Words are removed from the cache from time to time to make room for a new block of words.
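
A minimal C sketch of a write-back store and eviction, again reusing the direct-mapped declarations and assuming an extra dirty flag in the cache line (the flag and names are illustrative assumptions):

/* Minimal sketch of write-back: a write only updates the cache and sets a dirty flag;
   the word is copied to main memory when the line is later removed from the cache.
   Assumes a 'dirty' member added to struct cache_line from the direct-mapped sketch. */
void write_back(uint16_t addr, uint16_t value)
{
    uint16_t index = addr & (CACHE_WORDS - 1);
    cache[index].valid = true;
    cache[index].tag   = addr >> INDEX_BITS;
    cache[index].data  = value;                /* only the cache is updated on a write */
    cache[index].dirty = true;                 /* flag marks the word as modified */
}

void evict(uint16_t index)
{
    if (cache[index].dirty) {                  /* copy the modified word back when removed */
        uint16_t addr = (uint16_t)((cache[index].tag << INDEX_BITS) | index);
        main_memory[addr] = cache[index].data;
        cache[index].dirty = false;
    }
    cache[index].valid = false;                /* line is now free for a new word */
}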
