Cache Memory and Virtual Memory
Students:
Lourdes Barreto Gómez
Mauricio Maldonado Iturriago
October 2018
ARCHITECTURE OF CACHE MEMORY
• Direct mapping
A cache is a small, fast memory placed between the processor and RAM (random
access memory); its main purpose is to make data retrieval fast. With direct
mapping, every block of main memory is assigned to exactly one location in the
cache, a location it shares with other blocks of data; when a new block needs
that location, the data already stored there is overwritten.
This method decides where blocks will be stored in the cache in a very simple
way: each memory block is assigned to one line in the cache. Because main
memory is larger than the cache, many blocks share the same line, and if that
line is already occupied when a block is written to it, its contents are
overwritten. This saves the processor search time: each time it requests a
piece of data, the cache controller only has to check that one location to
find the information. The method also has a weakness, however: if a program
needs to continuously access several data blocks that share the same line of a
direct-mapped cache, that line is overwritten very often, and the data the
processor needs at a given moment is less likely to still be in the cache
line.
EXAMPLE:
Offset = 2 bits
Index bits = log2(16/4) = 2 bits
Instruction Length = log2(2048) = 11 bits
Tag = 11 bits - 2 bits - 2 bits = 7 bits
Block = 7 bits + 2 bits = 9 bits
The instruction has been converted
from hex to binary and allocated to tag,
index, and offset respectively
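The field widths above can be checked with a short sketch (a hypothetical Python model, not part of the original example) that splits an 11-bit word address into its tag, index, and offset fields for this direct-mapped configuration:

```python
# Hypothetical sketch of the direct-mapped example above:
# 2048-word memory, 16-word cache, 4-word blocks
# -> offset 2 bits, index 2 bits, tag 7 bits.

OFFSET_BITS = 2   # log2(4 words per block)
INDEX_BITS = 2    # log2(16 / 4 cache lines)
ADDR_BITS = 11    # log2(2048 words)
TAG_BITS = ADDR_BITS - INDEX_BITS - OFFSET_BITS  # 11 - 2 - 2 = 7

def split_address(addr):
    """Return the (tag, index, offset) fields of a word address."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Example: binary address 10000010110 -> tag 65, index 1, offset 2
print(split_address(0b10000010110))
```

Because the index picks the cache line directly, two addresses with the same index but different tags always map to the same line, which is exactly the overwriting conflict described above.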
• Fully associative mapping
This method allows a block of main memory to be moved into any free block of
the cache, so the information being sought is almost always in the cache. The
search itself is slow, however, because the cache blocks must be traversed,
comparing their tags one by one, until the desired memory block is found.
EXAMPLE:
Offset = 2 bits
Instruction Length = log2(2048) = 11 bits
Tag (block number) = 11 bits - 2 bits = 9 bits
The instruction has been converted
from hex to binary and allocated to
tag and offset respectively.
The OR gate is updated from the comparison results of the cache blocks. Both
AND gates report a miss, therefore the access is a CACHE MISS.
The cache table is updated accordingly: block 65, with offsets 0 to 3, is
copied into the cache.
The instruction has been converted from
hex to binary and allocated to tag and
offset respectively.
The valid bit is 1, so both cache table entries must be checked. The requested
tag and the cached tag are NOT the same.
The OR gate is updated from the comparison results of the cache blocks. Both
AND gates report a miss, therefore the access is a CACHE MISS.
The cache table is updated accordingly: block 145, with offsets 0 to 3, is
copied into the cache.
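The miss-then-fill behavior walked through above can be sketched in a few lines. This is an illustrative Python model (the class name and FIFO replacement are assumptions, not part of the example): every entry is checked — in hardware the tag comparisons happen in parallel — and on a miss the block is copied into a free entry.

```python
# Minimal sketch of a fully associative cache lookup.
# Assumes the 11-bit addresses of the example: 2-bit offset, 9-bit tag.

class FullyAssociativeCache:
    def __init__(self, num_blocks):
        # Each entry holds [valid bit, tag]; data is omitted for brevity.
        self.entries = [[False, None] for _ in range(num_blocks)]
        self.next_fill = 0  # simple FIFO replacement, for the sketch only

    def access(self, addr):
        tag = addr >> 2  # drop the 2-bit offset; the rest is the tag
        # A block can live anywhere, so every entry must be compared.
        for valid, cached_tag in self.entries:
            if valid and cached_tag == tag:
                return "HIT"
        # Miss: copy the requested block into the cache and record its tag.
        self.entries[self.next_fill] = [True, tag]
        self.next_fill = (self.next_fill + 1) % len(self.entries)
        return "MISS"

cache = FullyAssociativeCache(num_blocks=4)
print(cache.access(65 << 2))   # block 65: MISS, then copied in
print(cache.access(65 << 2))   # same block again: HIT
print(cache.access(145 << 2))  # block 145: MISS, copied into another entry
```

Note how block 145 does not evict block 65, because any free entry can be used; that flexibility is what direct mapping lacks.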
• Four-way set associative
Each set offers four places where a block of memory data can be stored. This
further reduces the search time and the chance that frequently used data
overwrite each other.
EXAMPLE:
Offset = 2 bits
Index bits = log2(16/4/4) = 0 bits
Instruction Length = log2(2048) = 11 bits
Tag = 11 bits - 2 bits - 0 bits = 9 bits
Block = 9 bits + 0 bits = 9 bits
The instruction has been converted
from hex to binary and allocated to
tag, index, and offset respectively
The requested index is searched for in the cache, as highlighted in yellow
The cache table is updated accordingly: block 185, with offsets 0 to 3, is
copied into the cache.
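A set-associative lookup combines both ideas: the index selects a set, and only the ways of that set are searched. The sketch below is an illustrative Python model (class and variable names are assumptions); with the parameters of this example (16-word cache, 4-word blocks, 4 ways) there is a single set, so the index field is 0 bits and all four ways are searched.

```python
import math

# Sketch of an n-way set-associative cache with 2-bit block offsets,
# matching the four-way example above (1 set of 4 ways).

class SetAssociativeCache:
    def __init__(self, num_sets, ways):
        self.ways = ways
        self.index_bits = int(math.log2(num_sets))  # 0 bits when num_sets == 1
        # One [valid bit, tag] pair per way, per set.
        self.sets = [[[False, None] for _ in range(ways)]
                     for _ in range(num_sets)]
        self.fill = [0] * num_sets  # FIFO fill pointer per set, for the sketch

    def access(self, addr):
        block = addr >> 2  # strip the 2-bit offset
        index = block & ((1 << self.index_bits) - 1) if self.index_bits else 0
        tag = block >> self.index_bits
        ways = self.sets[index]
        # Only the ways of the selected set are compared.
        for valid, cached_tag in ways:
            if valid and cached_tag == tag:
                return "HIT"
        # Miss: copy the block into the next way of this set.
        ways[self.fill[index]] = [True, tag]
        self.fill[index] = (self.fill[index] + 1) % self.ways
        return "MISS"

cache = SetAssociativeCache(num_sets=1, ways=4)
print(cache.access(185 << 2))  # block 185: MISS, copied into a free way
print(cache.access(185 << 2))  # HIT
```

Up to four blocks with the same index can now coexist before any eviction happens, which is why conflicts are rarer than with direct mapping.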
VIRTUAL MEMORY
EXAMPLE:
Offset = 2 bits
Instruction Length = log2(2048) = 11 bits
Physical Page Rows = 128 / 2^2 = 32 rows
Page Table Rows = 2048 / 2^2 = 512 rows
TLB Rows= 10 rows
The instruction has been converted
from hex to binary and allocated to
virtual page number and offset respectively
The requested page is not found in the page table, so the data is loaded from
secondary memory. The TLB, the page table, and physical memory are updated
accordingly.
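The translation path described above — TLB first, then the page table, then secondary memory on a page fault — can be sketched as follows. This is a hypothetical Python model (the function name, the dict-based tables, and the naive frame allocation are assumptions for illustration), using the 2-bit page offset of the example:

```python
# Sketch of virtual-to-physical address translation with a TLB in front of
# a page table. Pages hold 4 words, so the page offset is 2 bits.

OFFSET_BITS = 2

def translate(vaddr, tlb, page_table, backing_store, physical):
    """Return (physical address, source), where source records which level
    satisfied the translation: 'TLB', 'PAGE_TABLE', or 'DISK'."""
    vpn = vaddr >> OFFSET_BITS                 # virtual page number
    offset = vaddr & ((1 << OFFSET_BITS) - 1)  # offset within the page
    if vpn in tlb:  # fastest path: translation cached in the TLB
        return (tlb[vpn] << OFFSET_BITS) | offset, "TLB"
    if vpn in page_table:  # page is resident: refill the TLB
        tlb[vpn] = page_table[vpn]
        return (page_table[vpn] << OFFSET_BITS) | offset, "PAGE_TABLE"
    # Page fault: load the page from secondary memory, then update the
    # page table, the TLB, and physical memory accordingly.
    frame = len(physical)  # naive frame allocation, for the sketch only
    physical.append(backing_store[vpn])
    page_table[vpn] = frame
    tlb[vpn] = frame
    return (frame << OFFSET_BITS) | offset, "DISK"

tlb, page_table, physical = {}, {}, []
backing_store = {5: "page-5 data"}
print(translate((5 << 2) | 1, tlb, page_table, backing_store, physical))
print(translate((5 << 2) | 1, tlb, page_table, backing_store, physical))
```

The first access to page 5 faults and is served from secondary memory; the second access to the same page hits the TLB, mirroring the walkthrough above.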