DDCO With Answers (Digital Design and Computer Organization)
MODULE-5
1. Write and explain the control sequence for execution of the instruction ADD
(R3), R1 on a single bus processor
OR
Write and explain the control sequence for execution of an unconditional
branch instruction
• Here the processor contains only a single bus for the movement of data,
addresses and instructions.
• The ALU and all the registers are interconnected via a single common bus.
• The data & address lines of the external memory-bus are connected to the
internal processor-bus via MDR & MAR respectively.
EXECUTION OF A COMPLETE INSTRUCTION :
• Consider the instruction Add (R3),R1, which adds the contents of a memory-location
pointed to by R3 to register R1.
• Executing this instruction requires the following actions:
1) Fetch the instruction.
2) Fetch the first operand.
3) Perform the addition
4) Load the result into R1.
The figure gives the sequence of control steps required to perform these operations
for the single-bus architecture; the control-signal sequence is summarized after step 7 below.
➢ Step1--> The instruction-fetch operation is initiated by loading contents of PC
into MAR & sending a Read request to memory. The Select signal is set to
Select4, which causes the MUX to select the constant 4. This value is added to the
operand at input B (the PC's content), and the result is stored in Z.
➢ Step2--> Updated value in Z is moved to PC. This completes the PC increment
operation and PC will now point to next instruction.
➢ Step3--> The fetched instruction is moved into MDR and then to IR. Steps 1
through 3 constitute the Fetch Phase.
➢ At the beginning of step 4, the instruction decoder interprets the contents of the
IR. This enables the control circuitry to activate the control-signals for steps 4
through 7. Steps 4 through 7 constitute the Execution Phase.
➢ Step4--> Contents of R3 are loaded into MAR & a memory read signal is
issued
➢ Step5--> Contents of R1 are transferred to Y to prepare for addition.
➢ Step6--> When the Read operation is completed, the memory-operand is available
in MDR; it is added to the contents of Y and the sum is stored in Z.
➢ Step7--> The sum in Z is transferred to R1. The End signal causes a new
instruction-fetch cycle to begin by returning to step 1.
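Collecting steps 1 to 7, a typical control sequence for Add (R3),R1 on this single-bus organization, written in the register-transfer notation used later in these notes, is:
1) PCout, MARin, Read, Select4, Add, Zin
2) Zout, PCin, Yin, WMFC
3) MDRout, IRin
4) R3out, MARin, Read
5) R1out, Yin, WMFC
6) MDRout, SelectY, Add, Zin
7) Zout, R1in, End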
3. With a neat diagram, explain the single-bus processor data path with respect to
the organization of a computer and the fundamental concepts
SINGLE BUS ORGANIZATION:
• Here the processor contains only a single bus for the movement of data,
addresses and instructions.
• The ALU and all the registers are interconnected via a single common bus.
• The data & address lines of the external memory-bus are connected to the
internal processor-bus via MDR & MAR respectively. (MDR = Memory Data
Register, MAR = Memory Address Register).
• MDR has 2 inputs and 2 outputs. Data may be loaded into MDR either from the
memory-bus (external) or from the processor-bus (internal).
• MAR's input is connected to the internal bus; MAR's output is connected to the
external bus. (Addresses are sent from the processor to memory only.)
• The Instruction Decoder & Control Unit is responsible for decoding the
instruction, issuing the control-signals to all the units inside the processor,
and implementing the actions specified by the instruction (loaded in the IR).
• Processor Registers - Registers R0 through R(n-1) are also called General-Purpose
Registers. The programmer can access these registers for general-purpose use.
• Temporary Registers – There are 3 temporary registers in the processor.
Registers - Y, Z & Temp are used for temporary storage during program
execution. The programmer cannot access these 3 registers.
• In the ALU,
1) the A input gets the operand from the output of the multiplexer (MUX);
2) the B input gets the operand directly from the processor-bus.
• There are 2 options provided for the A input of the ALU.
• The MUX is used to select one of the 2 inputs.
• The MUX selects either the output of Y or the constant value 4 (which is used to
increment the PC content).
• An instruction is executed by performing one or more of the following
operations:
1) Transfer a word of data from one register to another or to the ALU.
2) Perform arithmetic or a logic operation and store the result in a register.
3) Fetch the contents of a given memory-location and load them into a register.
4) Store a word of data from a register into a given memory-location.
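For example, the first of these operations, copying the contents of R1 into R4 (Move R1,R4), is carried out by activating R1out and R4in in the same clock cycle.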
• Disadvantage: Only one data-word can be transferred over the bus in a clock
cycle. Solution: Provide multiple internal paths. Multiple paths allow several
data-transfers to take place in parallel.
6. Briefly explain fetching a word from memory and storing a word in memory
FETCHING A WORD FROM MEMORY
• To fetch instruction/data from memory, the processor has to specify the
address of the memory location where this information is stored and request a
Read operation.
• The processor transfers the required address to MAR. At the same time, the
processor issues a Read signal on the control-lines of the memory-bus.
• When requested-data are received from memory, they are stored in MDR.
From MDR, they are transferred to other registers in the processor.
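For example, the instruction Move (R1),R2 (load R2 from the memory-location pointed to by R1) can use the sequence:
1) R1out, MARin, Read ; the address is sent to memory and a Read is requested.
2) MDRinE, WMFC ; wait for the memory operation to complete; the data enter MDR from the memory-bus.
3) MDRout, R2in ; the data are transferred from MDR to R2.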
STORING A WORD IN MEMORY
• Consider the instruction Move R2,(R1). This requires the following sequence:
1) R1out, MARin ; the desired address is loaded into MAR.
2) R2out, MDRin, Write ; the data to be written are loaded into MDR & the Write
command is issued.
3) MDRoutE, WMFC ; the data are transferred from MDR into the memory-location
pointed to by R1.
7. Explain in detail (i) hazards and their types with examples (ii) pipeline
performance (iii) role of cache in pipelining (iv) the process of fetching a word
from memory, with diagram
HAZARDS:
Any condition that causes the pipeline to stall is called a hazard.
1) A data hazard is any condition in which either the source or the destination operands of an instruction are not
available at the time expected in the pipeline. As a result some operation has to be delayed, and the pipeline stalls.
2) A control hazard (also called an instruction hazard) occurs when the pipeline is stalled because of a
delay in the availability of an instruction, for example as a result of a miss in the instruction cache.
3) A third type of hazard, known as a structural hazard, is the situation in which two instructions require
the use of a given hardware resource at the same time.
If instructions and data reside in the same cache unit, only one instruction can
proceed and the other instruction is delayed. Many processors use separate
instruction and data caches to avoid this delay.
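For example, a data hazard arises if Mul R2, R3, R4 is immediately followed by Add R5, R4, R6: the Add needs the product in R4 as a source operand, so it must be stalled until the multiplication writes its result.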
It is important to understand that pipelining does not result in individual
instructions being executed faster; rather, it is the throughput that increases,
where throughput is measured by the rate at which instruction execution is
completed. Pipeline stalls cause degradation in pipeline performance. We
need to identify all hazards that may cause the pipeline to stall and to find ways
to minimize their impact.
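As a rough sketch of the performance argument (assuming an ideal k-stage pipeline with clock period τ, executing n instructions):

```latex
T_{\text{pipelined}} = (k + n - 1)\,\tau, \qquad
S = \frac{n\,k\,\tau}{(k + n - 1)\,\tau} \approx k \quad \text{for large } n
```

Each stall cycle (bubble) adds one extra τ to the total time, which is why hazards pull the throughput below the ideal rate of one instruction per clock cycle.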
3. A unit that completes its task early is idle for the remainder of the clock
period.
4. Hence, pipelining is most effective in improving performance if the tasks
being performed in different stages require about the same amount of
time. This consideration is particularly important for the instruction fetch
step, which is assigned one clock period.
5. The clock cycle has to be equal to or greater than the time needed to
complete a fetch operation. However, the access time of the main
memory may be as much as ten times greater than the time needed to
perform basic pipeline stage operations inside the processor, such as
adding two numbers.
Note that the Decode unit is idle in cycles 3 through 5, the Execute unit is
idle in cycles 4 through 6, and the Write unit is idle in cycles 5 through 7.
Such idle periods are called stalls. They are also often referred to as
bubbles in the pipeline.
MODULE-4
1. How are input and output performed by the processor? Write a program
that reads a line of characters and displays it
There are a number of input/output (I/O) devices that can be connected to a
computer.
The input may come from a keyboard, a sensor, a switch, a mouse, etc. Similarly,
the output may go to a speaker, a monitor, a printer, a digital display, etc. These
I/O devices exchange information in varied formats, have different word lengths
and operate at different transfer speeds, yet they are all connected to the same
system and exchange information with the same computer. The computer must be
capable of handling this wide variety of devices.
ACCESSING I/O DEVICES
• A single bus structure can be used for connecting I/O devices to a computer. The
simple arrangement of connecting a set of I/O devices to the memory and the
processor by means of a system bus is shown in the figure. Such an arrangement is
called a Single Bus Organization.
• The system bus enables all the devices connected to it to involve in the data
transfer operation.
• The system bus establishes data communication between I/O device and
processor.
• Each I/O device is assigned a unique set of addresses.
• When the processor places an address on the address-lines, the intended device
responds to the command.
• The processor requests either a read or a write operation.
• The requested data are transferred over the data-lines.
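To answer the second part of question 1, a sketch of a program that reads a line of characters and echoes it to the display, using program-controlled I/O over memory-mapped device registers (DATAIN, DATAOUT, STATUS and the SIN/SOUT flag positions are illustrative names, not fixed by these notes):

        Move      #LINE, R0          Initialize the buffer pointer.
READ:   TestBit   #SIN, STATUS       Wait for a character to be entered;
        Branch=0  READ               loop until the SIN flag becomes 1.
        Move      DATAIN, R5         Read the character into R5.
        Move      R5, (R0)+          Store it in the buffer, advance the pointer.
ECHO:   TestBit   #SOUT, STATUS      Wait until the display is ready;
        Branch=0  ECHO               loop until the SOUT flag becomes 1.
        Move      R5, DATAOUT        Send the character to the display.
        Compare   #CR, R5            Is it the end-of-line (carriage return)?
        Branch≠0  READ               If not, read the next character.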
SIMULTANEOUS REQUESTS
DAISY CHAIN
• The daisy chain with multiple priority levels is as shown in the figure.
• The interrupt-request line INTR is common to all devices, as shown in the figure.
• The interrupt-acknowledge line INTA is connected in a daisy-chain fashion, as
shown in the figure.
• This signal propagates serially from one device to another.
• When several devices raise an interrupt by activating the INTR signal, the
processor responds by activating the INTA signal.
• This signal is first received by device 1. Device 1 blocks the propagation of the
INTA signal to device 2 when it needs processor service.
• Device 1 passes the INTA signal on to the next device when it does not require
processor service.
• In the daisy-chain arrangement, device 1 has the highest priority.
Advantage: It requires fewer wires than individual connections.
ARRANGEMENT OF PRIORITY GROUPS
• In this technique, devices are organized in groups, and each group is connected
to the processor at a different priority level.
• Within a group, devices are connected in a daisy-chain fashion as shown in the
figure.
The following steps are necessary to transfer bus mastership from the CPU to
one of the DMA controllers:
• The DMA controller requests bus mastership from the processor by activating the
BR (Bus Request) signal.
• In response to this signal, the CPU transfers bus mastership to the requesting
device (DMA controller 1) by means of the BG (Bus Grant) signal.
• The BG signal is connected from DMA controller 1 to DMA controller 2 in a
daisy-chain fashion, as shown in the figure.
• When DMA controller 1 receives the bus-grant signal and needs the bus, it blocks
the signal from passing to DMA controller 2 and activates the BBSY (Bus Busy)
signal. While BBSY is set to 1, the other devices connected to the system bus
cannot obtain bus mastership from the CPU.
• When DMA controller 1 has not sent a BR request, it passes bus mastership to
DMA controller 2 by unblocking the bus-grant signal.
1. Direct Mapping:
• If there are 128 blocks in a cache, block j of the main memory maps onto
block j modulo 128 of the cache. When memory-blocks 0, 128, & 256 are
loaded into the cache, each is stored in cache-block 0. Similarly, memory-blocks
1, 129, 257 are stored in cache-block 1. (e.g. 1 mod 128 = 1, 129 mod 128 = 1)
• Contention may arise for a cache-block position even when the cache is not full,
because more than one memory-block is mapped onto a given cache-block position.
• The contention is resolved by allowing the new block to overwrite the currently
resident block. The memory-address determines the placement of a block in the
cache.
• Cache block number= (block-j of main memory)%128;
• The main memory block is loaded into cache block by means of memory
address. The main memory address consists of 3 fields as shown in the figure.
• Each block consists of 16 words. Hence least significant 4 bits are used to
select one of the 16 words.
• If there is no match (a cache miss), the block containing the required word must
first be read from the main memory. The main-memory block is placed in the cache
block given by the formula: cache block number = (main-memory block number) %
(number of blocks present in the cache memory).
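As a sketch of how the address fields follow from these numbers (assuming the usual textbook configuration implied here: a 16-bit word address, with a cache of 128 blocks of 16 words each):

```latex
\text{Tag (5 bits)} \;\big|\; \text{Block (7 bits)} \;\big|\; \text{Word (4 bits)} \qquad
\text{e.g. block } 129 \Rightarrow \text{cache block } 129 \bmod 128 = 1,\quad
\text{tag} = \lfloor 129/128 \rfloor = 1
```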
2. Associative Mapping:
• It is also called an associative-mapped cache. It is much more flexible.
• In this technique, a main-memory block can be placed into any cache-block
position.
• In this case, 12 tag bits are required to identify a memory-block when it is
resident in the cache memory.
• The Associative Mapping technique is illustrated as shown in the fig.
• In this technique 12 bits of address generated by the processor are compared
with the tag bits of each block of the cache to see if the desired block is present.
This is called as associative mapping technique.
3. Set Associative Mapping:
• It is the combination of direct and associative mapping techniques.
• The blocks of the cache are divided into several groups. Such groups are called
sets.
• Each set consists of a number of cache blocks. A memory-block is loaded into
one of the blocks of a particular cache set.
• The main memory address consists of three fields, as shown in the figure.
• The lower 4 bits of the memory address are used to select a word from the 16
words of a block.
• The cache consists of 64 sets, as shown in the figure. Hence a 6-bit set field is
used to select a cache set from the 64 sets.
• As there are 64 sets, the main-memory blocks are divided into groups of 64 blocks
each, and each group is given a tag number.
• The following figure clearly describes the working principle of Set Associative
Mapping technique.
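Under the same assumptions as above (16-bit address, 16-word blocks, 64 sets), the field split and a small worked example:

```latex
\text{Tag (6 bits)} \;\big|\; \text{Set (6 bits)} \;\big|\; \text{Word (4 bits)} \qquad
\text{e.g. block } 129 \Rightarrow \text{set } 129 \bmod 64 = 1,\quad
\text{tag} = \lfloor 129/64 \rfloor = 2
```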
• If a main-memory block is updated by a source (such as a DMA transfer) and a
copy of that block already exists in the cache, then the valid-bit of the cache block
is cleared to 0.
• The problem of keeping the cached copy and the main-memory copy consistent
when both the processor and DMA access the same data is called the Cache
Coherence Problem.
MODULE-3
1. With a neat diagram, explain the basic operational concepts of a computer,
explain the registers of the processor, and explain the sequence of steps
A computer consists of five functionally independent main parts:
input, memory, arithmetic and logic, output, and control units, as shown in
Figure 1.1.
The input unit accepts coded information from human operators using
devices such as keyboards, or from other computers over digital communication
lines.
The information received is stored in the computer's memory, either for
later use or to be processed immediately by the arithmetic and logic unit.
The processing steps are specified by a program that is also stored in the
memory.
Finally, the results are sent back to the outside world through the output unit.
All of these actions are coordinated by the control unit.
An interconnection network provides the means for the functional units to
exchange information and coordinate their actions.
BIG ENDIAN AND LITTLE ENDIAN ASSIGNMENT
Eg: store the word "JOHNSENA" in memory starting from word address 1000, using Big Endian and Little Endian assignment.
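A worked sketch, assuming 32-bit (4-byte) words so that the 8 characters occupy word locations 1000 and 1004, with each word holding four characters and the first character of the word as its most significant byte ("JOHN" then "SENA"):

Big-endian (most significant byte at the lowest byte address):
byte 1000 = 'J', 1001 = 'O', 1002 = 'H', 1003 = 'N'; byte 1004 = 'S', 1005 = 'E', 1006 = 'N', 1007 = 'A'.

Little-endian (least significant byte at the lowest byte address; the word values are unchanged):
byte 1000 = 'N', 1001 = 'H', 1002 = 'O', 1003 = 'J'; byte 1004 = 'A', 1005 = 'N', 1006 = 'E', 1007 = 'S'.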
MODULE-2
1. Design a combinational circuit to convert BCD to Excess-3 or can ask any
design.
Code Conversion (Convert BCD to Excess-3 Code)
• A code converter is a circuit that makes the two systems compatible even
though each uses a different binary code.
• Since each code uses four bits to represent a decimal digit, there must be four
input variables and four output variables. We designate the four input binary
variables by the symbols A, B, C, and D, and the four output variables by w, x, y ,
and z .
• Add 3 to the BCD code to obtain the Excess-3 code.
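A minimal Verilog sketch of this converter, using the add-3 relationship directly (module and port names are illustrative; a gate-level version would instead use the simplified K-map equations, e.g. z = D'):

```verilog
// BCD (A,B,C,D) to Excess-3 (w,x,y,z) code converter.
// Excess-3 = BCD + 3, so a 4-bit addition implements the conversion.
module bcd_to_excess3 (
    input  wire [3:0] bcd,   // {A,B,C,D}, valid range 0-9
    output wire [3:0] ex3    // {w,x,y,z}
);
    assign ex3 = bcd + 4'd3; // outputs are don't-cares for inputs 10-15
endmodule
```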
3. Explain carry look ahead adder circuit with block diagram
There are several techniques for reducing the carry propagation time in a
parallel adder. The most widely used technique employs the principle of carry
look ahead logic .
Carry Propagation
• The addition of two binary numbers in parallel implies that all the bits of the
augend and addend are available for computation at the same time.
• Consider the circuit of the full adder shown in Fig. 4.10.
• If we define two new binary variables
Pi = Ai ⊕ Bi (carry propagate) and Gi = Ai·Bi (carry generate),
the outputs of stage i can be written as Si = Pi ⊕ Ci and Ci+1 = Gi + Pi·Ci.
• Gi is called the carry generate; it produces a carry of 1 when both Ai and Bi are 1,
regardless of the input carry Ci.
• Pi is called the carry propagate, because it determines whether a carry into
stage i will propagate into stage i + 1.
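A minimal Verilog sketch of a 4-bit carry-lookahead adder built from these generate/propagate terms (one possible coding; module and port names are illustrative):

```verilog
// 4-bit carry-lookahead adder: the carries are computed directly from the
// generate (G) and propagate (P) terms instead of rippling stage by stage.
module cla4 (
    input  wire [3:0] a, b,
    input  wire       cin,
    output wire [3:0] sum,
    output wire       cout
);
    wire [3:0] g = a & b;      // Gi = Ai.Bi      (carry generate)
    wire [3:0] p = a ^ b;      // Pi = Ai xor Bi  (carry propagate)
    wire [4:0] c;              // c[0] = Cin, c[4] = Cout

    assign c[0] = cin;
    assign c[1] = g[0] | (p[0] & c[0]);
    assign c[2] = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c[0]);
    assign c[3] = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
                       | (p[2] & p[1] & p[0] & c[0]);
    assign c[4] = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
                       | (p[3] & p[2] & p[1] & g[0])
                       | (p[3] & p[2] & p[1] & p[0] & c[0]);

    assign sum  = p ^ c[3:0];  // Si = Pi xor Ci
    assign cout = c[4];
endmodule
```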
3b.What is a binary subtractor, explain
The addition and subtraction operations can be combined into one circuit with
one common binary adder by including an exclusive-OR gate with each full
adder.
A four-bit adder–subtractor circuit is shown in Fig. 4.13 . The mode input M
controls the operation. When M = 0, the circuit is an adder, and when M = 1, the
circuit becomes a subtractor.
Each exclusive-OR gate receives input M and one of the inputs of B. When M =
0, we have B ⊕ 0 = B. The full adders receive the value of B, the input carry is 0,
and the circuit performs A plus B. When M = 1, we have B ⊕ 1 = B' and C0 = 1.
The B inputs are all complemented and a 1 is added through the input carry.
The circuit performs the operation A plus the 2’s complement of B . (The
exclusive-OR with output V is for detecting an overflow.)
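A minimal Verilog sketch of the 4-bit adder-subtractor described above (module and port names are illustrative; the overflow output V is computed from the operand and result signs):

```verilog
// 4-bit adder-subtractor: M = 0 -> S = A + B, M = 1 -> S = A - B
// (A plus the 2's complement of B, as described above).
module addsub4 (
    input  wire [3:0] a, b,
    input  wire       m,      // mode: 0 = add, 1 = subtract
    output wire [3:0] s,
    output wire       cout,   // carry out of the MSB stage
    output wire       v       // signed-overflow flag
);
    wire [3:0] bx = b ^ {4{m}};       // each B input XORed with M
    assign {cout, s} = a + bx + m;    // input carry C0 = M
    // Overflow: operands have the same sign but the sum's sign differs.
    assign v = (a[3] == bx[3]) && (s[3] != a[3]);
endmodule
```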
Explain the full adder with the help of TT and Logic diagram
Full Adder
• A full adder is a combinational circuit that forms the arithmetic sum of three
bits. It consists of three inputs and two outputs.
• Two of the input variables, denoted by x and y , represent the two significant
bits to be added. The third input, z , represents the carry from the previous
lower significant position.
The two outputs are designated by the symbols S for sum and C for carry.
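The simplified output equations that follow from the truth table (standard results, shown as a sketch in sum-of-products form):

```latex
S = x \oplus y \oplus z, \qquad C = xy + xz + yz
```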
5. What is a decoder? Construct a two-to-four line decoder with enable input
along with TT and explain
DECODERS
• A decoder is a combinational circuit that converts binary information from n
input lines to a maximum of 2^n unique output lines.
• The decoders presented here are called n-to-m-line decoders, where m ≤ 2^n.
Their purpose is to generate the 2^n (or fewer) minterms of n input variables.
• Each combination of inputs will assert a unique output. The name decoder is
also used in conjunction with other code converters, such as a BCD-to-seven-
segment decoder.
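A minimal Verilog sketch of a two-to-four line decoder with enable (assuming active-high outputs and an active-high enable; module and port names are illustrative):

```verilog
// 2-to-4 line decoder with enable: exactly one output (the selected minterm)
// is asserted when en = 1; all outputs are 0 when en = 0.
module decoder2to4 (
    input  wire       en,
    input  wire [1:0] a,     // a[1] is the more significant select input
    output reg  [3:0] d
);
    always @(*) begin
        d = 4'b0000;
        if (en)
            d[a] = 1'b1;     // assert the output whose index matches the input
    end
endmodule
```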
6. Define Multiplexer? Explain 8:1 MUX with the help of logic diagram and
logic expression. Also implement the function F(A, B, C, D) = Σm(1, 2, 5, 6, 9, 12)
using an 8:1 MUX
Multiplexer
• A multiplexer is a combinational circuit that selects binary information from
one of many input lines and directs it to a single output line
• The selection of a particular input line is controlled by a set of selection lines
• Normally, there are 2^n input lines and n selection lines whose bit
combinations determine which input is selected.
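A sketch of an 8-to-1 MUX in Verilog, together with one way of realizing F(A,B,C,D) = Σm(1,2,5,6,9,12) with it: using A, B, C as the select lines, each select combination leaves F equal to D, D', 0 or 1, giving the data-input assignment shown (this assignment is worked out here, so verify it against the truth table):

```verilog
// 8-to-1 multiplexer: the 3-bit select code routes one of the eight
// data inputs to the single output.
module mux8to1 (
    input  wire [7:0] i,      // data inputs I0..I7
    input  wire [2:0] s,      // select lines {S2,S1,S0}
    output wire       y
);
    assign y = i[s];
endmodule

// F(A,B,C,D) = sum of minterms (1,2,5,6,9,12) using one 8:1 MUX.
// A,B,C drive the select lines; each data input is D, ~D, 0 or 1.
module f_with_mux (
    input  wire a, b, c, d,
    output wire f
);
    wire [7:0] data = { 1'b0,   // I7 (ABC=111): neither m14 nor m15 -> 0
                        ~d,     // I6 (ABC=110): m12 present -> D'
                        1'b0,   // I5 (ABC=101): neither m10 nor m11 -> 0
                        d,      // I4 (ABC=100): m9 present  -> D
                        ~d,     // I3 (ABC=011): m6 present  -> D'
                        d,      // I2 (ABC=010): m5 present  -> D
                        ~d,     // I1 (ABC=001): m2 present  -> D'
                        d };    // I0 (ABC=000): m1 present  -> D
    mux8to1 u_mux (.i(data), .s({a, b, c}), .y(f));
endmodule
```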
8. Explain the operation of JK, SR, T and D flip-flops and the positive-edge-triggered
D flip-flop with characteristic tables and truth tables
CHARACTERISTIC TABLES:
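The characteristic tables are normally given as figures; the corresponding characteristic equations (standard results, stated here as a quick reference) are:

```latex
\begin{aligned}
SR&:\ Q(t+1) = S + R'Q \quad (SR = 0) \\
JK&:\ Q(t+1) = JQ' + K'Q \\
D&:\  Q(t+1) = D \\
T&:\  Q(t+1) = T \oplus Q
\end{aligned}
```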
9.Explain the working of SR Latch using NOR gates, show how SR Latch can be
used for switch debouncing
10. Write the Verilog HDL code for (i) Two-to-one line MUX (ii) Two-to-four line
decoder (iii) Full adder using half adder modules
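Minimal Verilog sketches for parts (i) and (iii) are given below (module and port names are illustrative); a two-to-four line decoder in the same style is sketched in the decoder section above.

```verilog
// (i) 2-to-1 line multiplexer.
module mux2to1 (
    input  wire i0, i1, s,
    output wire y
);
    assign y = s ? i1 : i0;   // select i1 when s = 1, otherwise i0
endmodule

// (iii) Full adder built from two half-adder modules and an OR gate.
module half_adder (
    input  wire a, b,
    output wire s, c
);
    assign s = a ^ b;
    assign c = a & b;
endmodule

module full_adder (
    input  wire x, y, z,      // z is the carry-in
    output wire sum, cout
);
    wire s1, c1, c2;
    half_adder ha1 (.a(x),  .b(y), .s(s1),  .c(c1));
    half_adder ha2 (.a(s1), .b(z), .s(sum), .c(c2));
    assign cout = c1 | c2;
endmodule
```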