Name: J A D Sampath Registration Number: 510005470
Question 1
[Figure: 8086 internal block diagram. The Bus Interface Unit (BIU) contains the segment registers CS, SS, DS, ES and the instruction pointer IP (16-bit), the physical address calculation adder (Σ) and the 6-byte instruction queue. The Execution Unit (EU) contains the general-purpose registers AH/AL, BH/BL, CH/CL, DH/DL (16-bit), the offset registers SP, BP, SI, DI (16-bit), the ALU, the operand and flag registers, and the control system.]
Memory Architecture
[Figure: 1 MB memory map from 00000H to FFFFFH, divided into the Code Segment (addressed by CS:IP), Stack Segment (SS:SP), Data Segment (DS:SI) and Extra Segment (ES:DI); each physical address is formed from a segment address and an offset address.]
Steps in an instruction cycle
1) Calculate the physical address: PA = (Segment Address × 10H) + Offset Address
2) Fetch the instruction from memory through the address bus
3) Transfer the fetched instruction bytes to the instruction queue through the data bus
4) Pass the instruction to the control system to decode it (identify the opcode and the operands of the instruction)
5) While one instruction is executing, the next instructions are fetched to refill the queue (whenever at least two bytes are empty); this overlap of fetching and execution is called pipelining
6) The control system dispatches the decoded instruction
7) The Arithmetic Logic Unit (ALU) performs the necessary arithmetic or logical operation and the result is sent to the respective general-purpose register
8) Update the operand registers and the flag register
Question 2
Associative Cache
Cache memory is the fastest and most easily accessible memory in the hierarchy. When the processor requests a word and that word is present in the cache, a cache hit occurs and the word is delivered to the CPU from the cache. If the required word is not present in the cache, a cache miss occurs and the block containing the word must be brought in from main memory; this placement is governed by a cache mapping technique. The common cache mapping techniques are Direct Mapping, Associative Mapping and Set-Associative Mapping. Cache mapping defines how a block from main memory is placed into the cache memory on a miss; in other words, it is the technique by which the contents of main memory are brought into the cache memory.
Main memory is divided into equal-size partitions called blocks or frames. Cache memory is divided into partitions of the same size, called lines. During cache mapping, a block of main memory is copied into a cache line; the block is not actually removed from main memory.
Associative cache mapping is designed to solve the cache-contention problem that plagues the direct-mapped cache. In a fully associative cache, a data block from any memory address may be stored in any cache line, and the whole block address is used as the cache tag. Hence, when looking for a match, all the tags must be compared simultaneously with the requested address, which demands expensive extra comparator hardware. However, contention is avoided completely: no block ever needs to be evicted unless the whole cache is full, in which case a victim line is chosen by a replacement algorithm such as First-In First-Out (FIFO), Most Recently Used (MRU), Least Recently Used (LRU) or random replacement.
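As a sketch of the behaviour described above (line count here is an arbitrary example, not taken from the question), a fully associative cache with LRU replacement can be modelled with an ordered dictionary: every lookup compares the requested tag against all stored tags, and on a miss the least recently used line is evicted once the cache is full:

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Toy fully associative cache with LRU replacement.
    Keys are block tags (the whole block number); values are block data."""
    def __init__(self, num_lines: int):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # tag -> block data, kept in LRU order

    def access(self, tag: int) -> bool:
        """Return True on a hit, False on a miss (the block is then loaded)."""
        if tag in self.lines:            # tag matched one of the lines
            self.lines.move_to_end(tag)  # mark as most recently used
            return True
        if len(self.lines) == self.num_lines:
            self.lines.popitem(last=False)  # evict the least recently used line
        self.lines[tag] = f"block {tag}"    # load the block from main memory
        return False

cache = FullyAssociativeCache(num_lines=4)
hits = [cache.access(t) for t in [1, 2, 3, 4, 1, 5, 2]]
print(hits)  # -> [False, False, False, False, True, False, False]
```

Accessing block 5 evicts block 2 (the least recently used), so the final access to block 2 misses again.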
In associative mapping, the physical address is divided as follows:
• No. of blocks in main memory = main memory size / block size = 2^7 / 2^3 = 2^4, so the block index is 4 bits
• No. of lines in the cache memory = cache memory size / block size = 2^5 / 2^3 = 2^2, so the cache index is 2 bits
• Tag bits = block index bits (in fully associative mapping the whole block number is the tag)
• No. of cache lines = no. of comparators
• Size of each comparator = size of the tag field
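The field sizes above can be checked with a few lines of arithmetic, using the sizes given in the question:

```python
import math

MAIN_MEMORY_WORDS = 2**7   # main memory size
CACHE_WORDS       = 2**5   # cache memory size
BLOCK_WORDS       = 2**3   # block / line size

num_blocks = MAIN_MEMORY_WORDS // BLOCK_WORDS   # 2^4 = 16 blocks
num_lines  = CACHE_WORDS // BLOCK_WORDS         # 2^2 = 4 lines

block_index_bits = int(math.log2(num_blocks))   # 4 bits
tag_bits = block_index_bits                     # fully associative: tag = block index
num_comparators = num_lines                     # one comparator per cache line

print(num_blocks, num_lines, tag_bits, num_comparators)  # -> 16 4 4 4
```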
In associative cache mapping the tag must be searched across all the cache lines using comparators, so there is a large search overhead. To overcome this, the associative mapping technique was refined into the set-associative mapping method: the cache lines are grouped into several sets, and the memory address contains an additional field giving the set number.
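As an illustrative sketch of the set-number field (the sizes here are example assumptions, not taken from the question), a block number can be split into tag and set-number fields like this:

```python
def split_block_address(block_number: int, num_sets: int) -> tuple:
    """Split a block number into (tag, set_number) for a set-associative
    cache; num_sets must be a power of two."""
    set_number = block_number % num_sets   # low bits select the set
    tag = block_number // num_sets         # remaining high bits form the tag
    return tag, set_number

# Example: 16 blocks mapped onto 2 sets (each set searched associatively)
print(split_block_address(0b1011, num_sets=2))  # block 11 -> (tag 5, set 1)
```

Only the lines within the selected set are then compared against the tag, so far fewer comparators are needed than in fully associative mapping.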