
THE OPEN UNIVERSITY OF SRI LANKA FACULTY OF ENGINEERING TECHNOLOGY

THE DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING

EEX5536 - Computer Architecture


ASSIGNMENT 2
Academic Year 2019/2020

Name : J A D Sampath
Registration Number: 510005470
Question 1

Descriptions and Features of 8086


• An enhanced version of the 8085 microprocessor, designed by Intel in 1976.
• It is a 16-bit microprocessor with 20 address lines and 16 data lines, allowing it to address up to 1 MB of memory. It has a powerful instruction set that provides operations such as multiplication and division.
• It supports two modes of operation, i.e. maximum mode and minimum mode. Maximum mode is suitable for systems with multiple processors, and minimum mode for systems with a single processor.
• It has an instruction queue capable of storing six instruction bytes fetched ahead from memory, resulting in faster processing.
• It was the first 16-bit processor with a 16-bit ALU, 16-bit registers, a 16-bit internal data bus and a 16-bit external data bus, resulting in faster processing.
• It is available in three versions based on operating frequency:
• 8086 → 5 MHz
• 8086-2 → 8 MHz
• 8086-1 → 10 MHz
• It uses two stages of pipelining, i.e. the fetch stage and the execute stage, which improves performance.
• The fetch stage can prefetch up to 6 bytes of instructions and store them in the queue.
• The execute stage executes these instructions.
• It has 256 vectored interrupts.
• It consists of 29,000 transistors.
8086 Microprocessor Architecture
[Block diagram] The 8086 is divided into two functional units:
• The Bus Interface Unit (BIU) contains the four segment registers (CS, SS, DS, ES), the 16-bit instruction pointer (IP), the physical address calculation unit (∑), the 6-byte instruction queue and the memory interface.
• The Execution Unit (EU) contains the control system, the general purpose registers (AH/AL, BH/BL, CH/CL, DH/DL, usable as 16-bit pairs), the ALU, the 16-bit offset registers (SP, BP, SI, DI), the operand registers and the flag register.
Memory Architecture
[Diagram] The 1 MB address space runs from 00000H to FFFFFH and holds four segments, each located by a segment register and an offset register pair:

Segment Address | Offset Address
CS              | IP   (Code Segment)
SS              | SP   (Stack Segment)
DS              | SI   (Data Segment)
ES              | DI   (Extra Segment)
Steps in an instruction cycle
1) Calculate the physical address (PA = Segment Address × 10H + Offset Address)
2) Fetch the instruction from memory over the address bus
3) Transfer the fetched instruction bytes to the queue over the data bus
4) Pass the instruction to the control system, which decodes it (identifying the opcode and operands of the instruction)
5) While one instruction is executing, the next is fetched to refill the queue (whenever at least two bytes are empty); this is called pipelining
6) The control system dispatches the decoded instruction
7) The Arithmetic Logic Unit (ALU) performs the required arithmetic or logical operation and the result is sent to the appropriate general purpose register
8) The operand and flag registers are updated
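The physical address calculation in step 1 can be sketched as a short Python function (an illustrative sketch, not part of the original assignment; the function name is mine):

```python
def physical_address(segment: int, offset: int) -> int:
    """8086 physical address: shift the 16-bit segment left by 4 bits
    (i.e. multiply by 10H) and add the 16-bit offset, giving a 20-bit
    address; results beyond FFFFFH wrap around on the 8086."""
    return ((segment << 4) + offset) & 0xFFFFF

# Example: CS = 1234H, IP = 0022H gives 12340H + 0022H = 12362H
print(hex(physical_address(0x1234, 0x0022)))  # 0x12362
```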
Question 2

Associative Cache
Cache memory is the fastest and most easily accessible memory. When the processor requests a word, if it is present in the cache a cache hit occurs and the word is delivered to the CPU from the cache. If the required word is not present, a cache miss occurs and the block containing it must be mapped from main memory into the cache; this mapping is performed using a cache mapping technique. A few cache mapping techniques are available, such as direct mapping, associative mapping and set associative mapping. Cache mapping defines how a block of main memory is mapped to the cache memory on a cache miss; in other words, it is the technique by which the contents of main memory are brought into the cache.
Main memory is divided into equal-sized partitions called blocks (or frames), and cache memory is divided into partitions of the same size called lines. During cache mapping a block of main memory is simply copied into a cache line; the block is not actually removed from main memory.
Associative cache mapping is designed to solve the problem of cache contention that plagues the direct-mapped cache. In a fully associative cache, a data block from any memory address may be stored in any cache line, and the whole block address is used as the cache tag. Hence, when looking for a match, all the tags must be compared simultaneously with the requested address, which demands expensive extra hardware. However, contention is avoided completely, as no block ever needs to be flushed unless the whole cache is full, in which case the least recently used block may be chosen. Several cache replacement algorithms are used to choose the line to replace, such as First In First Out (FIFO), Most Recently Used (MRU), Least Recently Used (LRU) and random replacement.
In associative mapping, the physical address is divided as follows:

| Block Number / Tag | Block Offset / Line |
Let's take the following example:

• Main memory size 128 bytes (2^7), so the main memory address is 7 bits
• Block size 8 bytes (2^3), so the block offset is 3 bits
• Block index size 4 bits
• Cache memory size 32 bytes (2^5), so the cache address is 5 bits

In associative cache mapping the main memory block index becomes the cache tag, and the full memory block is copied into a cache line, with the block offset selecting the word within the line.
• Tag size = block index size = 4 bits
• Block offset = 3 bits
When the CPU makes a request, the least significant 3 bits of the physical address form the block offset and the remaining bits form the tag. The tag is compared simultaneously with the tag of every cache line using comparators; if a tag matches, there is a cache hit. On a cache miss the block must be mapped from main memory into any available cache line; if the cache is already full, a line must first be freed using a cache replacement algorithm.
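The lookup and replacement behaviour described above can be sketched in Python (a minimal illustration, assuming LRU replacement and the example's 4 lines and 3 offset bits; the class and method names are mine):

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Sketch of a fully associative cache with LRU replacement."""

    def __init__(self, num_lines: int = 4, offset_bits: int = 3):
        self.num_lines = num_lines
        self.offset_bits = offset_bits
        self.lines = OrderedDict()  # tag -> block; ordering tracks recency

    def access(self, address: int) -> bool:
        """Return True on a cache hit, False on a miss (block then loaded)."""
        tag = address >> self.offset_bits      # all bits above the offset
        if tag in self.lines:                  # comparators check every line
            self.lines.move_to_end(tag)        # mark as most recently used
            return True
        if len(self.lines) == self.num_lines:
            self.lines.popitem(last=False)     # evict least recently used
        self.lines[tag] = None                 # load block from main memory
        return False

cache = FullyAssociativeCache()
print(cache.access(0b0000101))  # False: tag 0 not cached yet (miss)
print(cache.access(0b0000110))  # True: same block, tag 0 (hit)
```

Addresses 0000101 and 0000110 differ only in the 3 offset bits, so they share tag 0000 and the second access hits.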
Necessary formulas are as follows:

• No. of blocks in main memory = main memory size / block size = 2^7 / 2^3 = 2^4 = 2^(block index)
• No. of lines in cache memory = cache memory size / block size = 2^5 / 2^3 = 2^2 = 2^(cache index)
• Tag bits = block index bits
• No. of cache lines = No. of comparators
• Size of each comparator = size of the tag in bits
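As a quick check, plugging the example sizes into the formulas reproduces the numbers above (a small sketch; the variable names are illustrative, not from the original document):

```python
# Example sizes from the text
main_memory_size = 2**7   # 128 bytes -> 7-bit main memory address
block_size       = 2**3   # 8 bytes   -> 3-bit block offset
cache_size       = 2**5   # 32 bytes  -> 5-bit cache address

num_blocks = main_memory_size // block_size            # 2^4 = 16 blocks
num_lines  = cache_size // block_size                  # 2^2 = 4 lines
offset_bits = (block_size - 1).bit_length()            # 3 bits
tag_bits = (main_memory_size - 1).bit_length() - offset_bits  # 7 - 3 = 4
num_comparators = num_lines                            # one per cache line

print(num_blocks, num_lines, tag_bits, num_comparators)  # 16 4 4 4
```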
In associative cache mapping the tag must be searched across all cache lines using comparators, so there is a large search overhead. To overcome this, the associative mapping technique has been refined into the set associative mapping method, in which the cache lines are grouped into several sets and the memory address carries an additional field giving the set number.