
STRUCTURE OF THE COMPUTER I

WORKSHOP: CACHE MEMORY AND VIRTUAL MEMORY

Students:
 Lourdes Barreto Gómez
 Mauricio Maldonado Iturriago

October 2018
ARCHITECTURE OF CACHE MEMORY
• Direct mapping
A cache memory is a small, very fast memory that sits between the processor and main RAM (random access memory); its main purpose is to make data quick to retrieve. With direct mapping, every piece of data in main memory is assigned to one specific location in the cache, a location it shares with other pieces of data; whenever new data needs to be saved in that location, the data already stored there is overwritten.
This method decides where blocks will be stored in the cache in a very simple and easy way: each block of memory is assigned to exactly one line of the cache. Because main memory is much larger than the cache, many blocks share each line, and if a line is already full when a block is written to it, its contents are overwritten. This saves the processor search time, because each time it requests a piece of data, the cache controller only has to go to that one location to find the information. The method also has a weakness: if a program needs to access continuously several data blocks that share the same line of a direct-mapped cache, that line is overwritten very often, and the data the processor finds there is then less likely to be the data it needs at that moment.

EXAMPLE:
Offset = 2 bits
Index bits = log2(16/4) = 2 bits
Instruction Length = log2(2048) = 11 bits
Tag = 11 bits - 2 bits - 2 bits = 7 bits
Block = 7 bits + 2 bits = 9 bits
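To make the field arithmetic above concrete, here is a minimal Python sketch. It assumes, from the figures, a 2048-word address space, a 16-word cache, and 4-word blocks; the address 0x784 is hypothetical, chosen so that it falls in block 1E1 (the block used in the first lookup below).

```python
import math

# Assumed parameters, taken from the worked example above:
# 2048-word address space, 16-word cache, 4-word blocks.
ADDRESS_SPACE = 2048
CACHE_WORDS = 16
BLOCK_WORDS = 4

OFFSET_BITS = int(math.log2(BLOCK_WORDS))                 # 2 bits
INDEX_BITS = int(math.log2(CACHE_WORDS // BLOCK_WORDS))   # 2 bits
ADDR_BITS = int(math.log2(ADDRESS_SPACE))                 # 11 bits
TAG_BITS = ADDR_BITS - INDEX_BITS - OFFSET_BITS           # 7 bits


def split_address(addr):
    """Split an 11-bit address into (tag, index, offset) for direct mapping."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset


# Hypothetical address 0x784: it belongs to block 0x1E1 (tag + index) with offset 0.
tag, index, offset = split_address(0x784)
print(f"tag={tag:0{TAG_BITS}b} index={index:0{INDEX_BITS}b} offset={offset:0{OFFSET_BITS}b}")
# tag=1111000 index=01 offset=00
```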
The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The requested index is searched for in the cache, as highlighted in yellow.

The valid bit is obtained and analysed; the analysis diagram follows.

The valid bit is 0, therefore a CACHE MISS occurs and the cache is updated with the new data.

The cache table is updated accordingly: block 1E1, offsets 0 to 3, is transferred to the cache.
The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The requested index is searched for in the cache, as highlighted in yellow.

The valid bit is obtained and analysed; the analysis diagram follows.

The valid bit is 0, therefore a CACHE MISS occurs and the cache is updated with the new data.

The cache table is updated accordingly: block 52, offsets 0 to 3, is transferred to the cache.

The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The requested index is searched for in the cache, as highlighted in yellow.

The valid bit is obtained and analysed; the analysis diagram follows.

The valid bit is 1, so the tag must be examined. The requested tag and the cached tag are NOT the same, therefore a CACHE MISS occurs.

The cache replaces the old entry at this index; since the dirty bit is 0, no additional write-back is required.

The cache table is updated accordingly: block 3A, offsets 0 to 3, is transferred to the cache.

The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The valid bit is obtained and analysed; the analysis diagram follows.

The valid bit is 0, therefore a CACHE MISS occurs and the cache is updated with the new data.

The cache table is updated accordingly: block 80, offsets 0 to 3, is transferred to the cache.
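All four accesses above follow the same decision procedure: select the line by index, check its valid bit, compare tags, and on a miss load the block (writing the victim back first if its dirty bit were set). The following Python sketch is a simplification of that flow under the same assumed sizes; the addresses are hypothetical, chosen to fall in blocks 1E1, 52, 3A, and 80.

```python
OFFSET_BITS, INDEX_BITS = 2, 2      # 4-word blocks, 4 cache lines (assumed, as above)
NUM_LINES = 1 << INDEX_BITS

cache = [{"valid": 0, "dirty": 0, "tag": None} for _ in range(NUM_LINES)]


def access(addr):
    """Return 'HIT' or 'MISS' for one read, updating the cache on a miss."""
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    line = cache[index]
    if line["valid"] and line["tag"] == tag:
        return "HIT"
    if line["valid"] and line["dirty"]:
        pass  # a dirty victim would be written back to main memory first
    line.update(valid=1, dirty=0, tag=tag)  # bring the new block in
    return "MISS"


# Hypothetical addresses chosen to fall in blocks 1E1, 52, 3A and 80.
for addr in (0x784, 0x148, 0x0E8, 0x200):
    print(hex(addr), access(addr))   # all four accesses miss, as in the walkthrough
```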

• Fully associative mapping
This scheme allows any block of main memory to be moved to any free block of the cache memory. The information being searched for is almost always in the cache, but the search itself is slow, because there is no index pointing to a single line: the controller has to go through the blocks of the cache until it finds the memory block it wants.
EXAMPLE:
Offset = 2 bits
Instruction Length = log2(2048) = 11 bits
Block = 11 bits - 2 bits = 9 bits
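In the fully associative case the address splits only into a 9-bit block number (used as the tag) and a 2-bit offset, as computed above. A minimal Python sketch of that split; the 4-word block size is assumed from the 2-bit offset, and the address 0x7FC is hypothetical, chosen so its block number is 111111111 as in the first lookup below.

```python
OFFSET_BITS = 2                        # 4-word blocks (assumed, as in the example)
ADDR_BITS = 11                         # log2(2048)
BLOCK_BITS = ADDR_BITS - OFFSET_BITS   # 9-bit block number, used as the tag


def split_fully_associative(addr):
    """Split an address into (block_number, offset); there is no index field."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    block = addr >> OFFSET_BITS
    return block, offset


# Hypothetical address whose block number is 111111111, as in the first step below.
block, offset = split_fully_associative(0x7FC)
print(f"block={block:0{BLOCK_BITS}b} offset={offset:02b}")   # block=111111111 offset=00
```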
The instruction has been converted from hex to binary and allocated to block number (tag) and offset respectively; fully associative mapping has no index field.

The requested block number is searched for in the whole cache.

No cache block contains 111111111, therefore a CACHE MISS occurs.

The new data is imported into the cache.

The requested block number is searched for in the whole cache.

No cache block contains 110101000, therefore a CACHE MISS occurs.

The new data is imported into the cache.

The instruction has been converted from hex to binary and allocated to block number (tag) and offset respectively.

The requested block number is searched for in the whole cache.

No cache block contains 101011000, therefore a CACHE MISS occurs.

The new data is imported into the cache.

The instruction has been converted from hex to binary and allocated to block number (tag) and offset respectively.

The requested block number is searched for in the whole cache.

No cache block contains 111101011, therefore a CACHE MISS occurs.

The new data is imported into the cache.

The instruction has been converted from hex to binary and allocated to block number (tag) and offset respectively.

The requested block number is searched for in the whole cache.

No cache block contains 111110111, therefore a CACHE MISS occurs.

The new data is imported into the cache.
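Each lookup above is the same linear search: compare the requested block number against every valid cache line, and on a miss load the block into a free line (a real cache also needs a replacement policy once all lines are full). A minimal Python sketch, assuming a 4-line cache as in the direct-mapped example:

```python
NUM_BLOCKS = 4   # 16-word cache with 4-word blocks (assumed)
cache = [{"valid": 0, "block": None} for _ in range(NUM_BLOCKS)]


def access(block_number):
    """Search every cache line for the block; load it on a miss."""
    for line in cache:
        if line["valid"] and line["block"] == block_number:
            return "HIT"
    # Miss: use the first free line; a real cache would need a replacement
    # policy (LRU, FIFO, random) once all lines are valid.
    for line in cache:
        if not line["valid"]:
            line.update(valid=1, block=block_number)
            return "MISS"
    victim = cache[0]                     # naive choice of victim
    victim.update(valid=1, block=block_number)
    return "MISS"


# The five block numbers from the walkthrough, given in binary.
for block in (0b111111111, 0b110101000, 0b101011000, 0b111101011, 0b111110111):
    print(f"{block:09b}", access(block))   # every access misses
```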

MULTI-WAY SET-ASSOCIATIVE MAPPING


It combines the characteristics of the two methods presented above (direct mapping and fully associative mapping). The cache is divided into sets of two, four, or eight blocks, each block holding one block of data. These methods are more difficult to implement.
This technique divides the cache into q sets, each of which contains r blocks.
The following example covers the case in which the number of blocks contained in a set is r = 2.
• Two-way
It has two sets, and each set offers two possible places for a block of memory data. This reduces the search time and the possibility that frequently used data overwrite each other.
EXAMPLE:
Offset = 2 bits
Index bits = log2(16/4/2) = 1 bit
Instruction Length = log2(2048) = 11 bits
Tag = 11 bits - 2 bits - 1 bit = 8 bits
Block = 8 bits + 1 bit = 9 bits
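The arithmetic above generalizes: with a cache of C words, blocks of B words, and W ways, the number of sets is C / (B × W), the index needs log2 of that many bits, and the tag takes the remaining address bits. A small Python sketch using the values assumed from the example:

```python
import math


def field_widths(address_space, cache_words, block_words, ways):
    """Return (tag_bits, index_bits, offset_bits) for a set-associative cache."""
    offset_bits = int(math.log2(block_words))
    sets = cache_words // (block_words * ways)
    index_bits = int(math.log2(sets))
    addr_bits = int(math.log2(address_space))
    tag_bits = addr_bits - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits


# Values assumed from the example: 2048-word address space, 16-word cache, 4-word blocks.
print(field_widths(2048, 16, 4, 1))   # direct mapped: (7, 2, 2)
print(field_widths(2048, 16, 4, 2))   # two-way:       (8, 1, 2)
print(field_widths(2048, 16, 4, 4))   # four-way:      (9, 0, 2)
```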
The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The requested index is searched for in the cache, as highlighted in yellow.

The valid bit of each way is obtained and analysed; the analysis diagram follows.

Both valid bits are 0, therefore both AND gates output MISS.

The OR gate combines the results from the cache blocks. Both AND gates report MISS, therefore the result is a CACHE MISS.

The cache table is updated accordingly: block 65, offsets 0 to 3, is copied into the cache.
The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The requested index is searched for in the cache, as highlighted in yellow.

The valid bit of each way is obtained and analysed; the analysis diagram follows.

Both valid bits are 0, therefore both AND gates output MISS.

The OR gate combines the results from the cache blocks. Both AND gates report MISS, therefore the result is a CACHE MISS.

The cache table is updated accordingly: block 10C, offsets 0 to 3, is copied into the cache.
The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

A valid bit is 1, so the tags in both ways of the set must be examined. The requested tag and the cached tags are NOT the same.

The OR gate combines the results from the cache blocks. Both AND gates report MISS, therefore the result is a CACHE MISS.

The cache table is updated accordingly: block 145, offsets 0 to 3, is copied into the cache.
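The AND/OR wording in these steps describes the hit-detection logic: for each way, an AND gate combines "valid bit set" with "tag matches", and an OR gate across the ways signals the overall hit. A minimal Python sketch of a two-way set under the same assumed sizes (1 index bit, 8-bit tags); the addresses are hypothetical, chosen to fall in blocks 65, 10C, and 145.

```python
OFFSET_BITS, INDEX_BITS, WAYS = 2, 1, 2    # assumed: 4-word blocks, 2 sets, 2 ways
NUM_SETS = 1 << INDEX_BITS

# Each set holds WAYS entries of (valid, tag).
sets = [[{"valid": 0, "tag": None} for _ in range(WAYS)] for _ in range(NUM_SETS)]


def access(addr):
    index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    ways = sets[index]
    # "AND gate" per way: valid AND tag-match; "OR gate" across the ways.
    way_hits = [way["valid"] and way["tag"] == tag for way in ways]
    if any(way_hits):
        return "HIT"
    # Miss: fill a free way if there is one, otherwise evict way 0 (naive policy).
    victim = next((w for w in ways if not w["valid"]), ways[0])
    victim.update(valid=1, tag=tag)
    return "MISS"


# Hypothetical addresses falling in blocks 65, 10C and 145 from the walkthrough.
for addr in (0x194, 0x430, 0x514):
    print(hex(addr), access(addr))   # all three accesses miss
```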
• Four-way
Each set offers four places for a block of memory data (in this small example the cache has a single set, so the index needs 0 bits and every block can go into any of the four ways). This further reduces the search time and the possibility that frequently used data overwrite each other.
EXAMPLE:
Offset = 2 bits
Index bits = log2(16/4/4) = 0 bits
Instruction Length = log2(2048) = 11 bits
Tag = 11 bits - 2 bits - 0 bits = 9 bits
Block = 9 bits + 0 bits = 9 bits
The instruction has been converted from hex to binary and allocated to tag, index, and offset respectively.

The requested index is searched for in the cache, as highlighted in yellow.

The valid bit of each way is obtained and analysed; the analysis diagram follows.

All valid bits are 0, therefore all four AND gates output MISS.

The OR gate combines the results from the cache blocks. All AND gates report MISS, therefore the result is a CACHE MISS.

The cache table is updated accordingly: block 185, offsets 0 to 3, is copied into the cache.
VIRTUAL MEMORY
EXAMPLE:
Offset = 2 bits
Instruction Length = log2(2048) = 11 bits
Physical Page Rows = 128 / 2^2 = 32 rows
Page Table Rows = 2048 / 2^2 = 512 rows
TLB Rows = 10 rows
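These figures imply a 4-word (2^2) page size, 128 words of physical memory, and a 2048-word virtual address space, so a virtual address splits into a 9-bit virtual page number and a 2-bit page offset. A hedged Python sketch of that arithmetic (the address 0x3B5 is hypothetical):

```python
import math

# Parameters assumed from the figures: 4-word pages, 128-word physical memory,
# 2048-word virtual address space, 10-entry TLB.
PAGE_WORDS = 4
PHYSICAL_WORDS = 128
VIRTUAL_WORDS = 2048

offset_bits = int(math.log2(PAGE_WORDS))        # 2 bits
physical_pages = PHYSICAL_WORDS // PAGE_WORDS   # 32 physical page frames
virtual_pages = VIRTUAL_WORDS // PAGE_WORDS     # 512 page-table rows
vpn_bits = int(math.log2(virtual_pages))        # 9-bit virtual page number


def split_virtual(addr):
    """Split a virtual address into (virtual page number, page offset)."""
    return addr >> offset_bits, addr & (PAGE_WORDS - 1)


# 0x3B5 is a hypothetical virtual address used only for illustration.
print(physical_pages, virtual_pages, split_virtual(0x3B5))   # 32 512 (237, 1)
```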
The instruction has been converted from hex to binary and allocated to virtual page number and page offset respectively.

The requested page is searched for in the whole TLB.

There is no valid entry for this page in the TLB, so the search continues in the Page Table.

The requested page is not found in the Page Table either, so a page fault occurs: the data is loaded from Secondary Memory, and the TLB, Page Table, and Physical Memory are updated accordingly.
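The lookup above follows the usual translation path: try the TLB first, fall back to the Page Table, and on a page fault bring the page in from secondary memory and update all three structures. A simplified, dictionary-based Python sketch of that flow (the virtual page number 0x1D5 and the naive frame allocation are assumptions, not taken from the document):

```python
# Simplified translation structures: the TLB and page table map a virtual page
# number to a physical frame; a missing entry means the page is not resident.
tlb = {}                 # up to 10 entries in the example
page_table = {}          # 512 possible rows in the example
next_free_frame = 0      # naive frame allocator (no eviction modelled)


def translate(vpn):
    """Return (physical frame, outcome) for a virtual page number."""
    global next_free_frame
    if vpn in tlb:                                   # 1. TLB hit
        return tlb[vpn], "TLB HIT"
    if vpn in page_table:                            # 2. page-table hit
        tlb[vpn] = page_table[vpn]                   # refill the TLB
        return page_table[vpn], "TLB MISS, PAGE TABLE HIT"
    # 3. Page fault: load the page from secondary memory into a free frame,
    #    then update the page table and the TLB.
    frame = next_free_frame
    next_free_frame += 1
    page_table[vpn] = frame
    tlb[vpn] = frame
    return frame, "PAGE FAULT"


# 0x1D5 is a hypothetical virtual page number used only for illustration.
print(translate(0x1D5))   # first access: page fault, as in the walkthrough
print(translate(0x1D5))   # repeated access: now a TLB hit
```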
