CS3351 - DP & CO Notes
COMBINATIONAL LOGIC
Combinational circuits - K-Map - Analysis and Design Procedures - Binary Adder - Binary Subtractor - Decimal Adder - Magnitude Comparator - Decoder - Encoder - Multiplexers - Demultiplexers
INTRODUCTION:
The digital system consists of two types of circuits, namely
(i) Combinational circuits
(ii) Sequential circuits
A combinational circuit consists of logic gates whose outputs at any time are determined from the present combination of inputs only. The logic gate is the most basic building block of combinational logic. The logical function performed by a combinational circuit is fully defined by a set of Boolean expressions in the input variables. The logic gates accept signals from the inputs and generate output signals according to the logic circuits employed. In this process the given binary input data are transformed to the desired binary output data. Both the input and output are binary signals, i.e., both the input and output signals take one of two possible states, logic 1 and logic 0.
Block diagram of a combinational logic circuit
DESIGN PROCEDURES:
Any combinational circuit can be designed by the following steps of design procedure.
1. State the problem.
2. Identify the input and output variables and assign letter symbols to them.
3. Derive the truth table that defines the required relationship between inputs and outputs.
4. Obtain the simplified Boolean function for each output.
5. Draw the logic diagram using the appropriate gates.
Problems:
1. Design a combinational circuit with three inputs and one output. The output is 1
when the binary value of the inputs is less than 3. The output is 0 otherwise.
Solution:
Truth Table:
x y z F
0 0 0 1
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 0
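The truth table can be checked exhaustively with a short sketch (Python is used here purely as illustration; F = x'y' + x'z' is one K-map simplification of minterms 0, 1 and 2):

```python
def f(x, y, z):
    # One K-map simplification of minterms 0, 1, 2: F = x'y' + x'z'
    return int((not x and not y) or (not x and not z))

# Exhaustive check against the truth table: F = 1 exactly when xyz < 3
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert f(x, y, z) == (1 if 4*x + 2*y + z < 3 else 0)
```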
2. Design a combinational circuit with three inputs, x, y and z, and the three
outputs, A, B, and C. when the binary input is 0, 1, 2, or 3, the binary output is
one greater than the input. When the binary input is 4, 5, 6, or 7, the binary
output is one less than the input.
Solution:
Truth Table:
Derive the truth table that defines the required relationship between inputs and
outputs.
x y z A B C
0 0 0 0 0 1
0 0 1 0 1 0
0 1 0 0 1 1
0 1 1 1 0 0
1 0 0 0 1 1
1 0 1 1 0 0
1 1 0 1 0 1
1 1 1 1 1 0
Obtain the simplified Boolean functions for each output as a function of the input
K-map for output B:
The simplified expression from the map is: B= x’y’z+ x’yz’+ xy’z’+ xyz
Logic Diagram:
3. Design a 3-input majority circuit: the output is 1 when the input variables have more 1's than 0's, and 0 otherwise.
Solution:
Truth Table:
x y z F
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1
K-map Simplification:
Logic Diagram:
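The usual K-map result for this table is F = xy + yz + xz, which a brute-force sketch confirms (Python used only for illustration):

```python
def majority(x, y, z):
    # K-map simplification of minterms 3, 5, 6, 7: F = xy + yz + xz
    return int((x and y) or (y and z) or (x and z))

# The output is 1 exactly when two or more inputs are 1
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert majority(x, y, z) == (1 if x + y + z >= 2 else 0)
```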
ARITHMETIC CIRCUITS:
In this section, we will discuss those combinational logic building blocks that
can be used to perform addition and subtraction operations on binary numbers.
Addition and subtraction are the two most commonly used arithmetic operations,
as the other two, namely multiplication and division, are respectively the processes
of repeated addition and repeated subtraction.
The basic building blocks that form the basis of all hardware used to perform
the arithmetic operations on binary numbers are half-adder, full adder, half-
subtractor, full-subtractor.
Half-Adder:
A half-adder is a combinational circuit that can be used to add two binary
bits. It has two inputs that represent the two bits to be added and two outputs, with
one producing the SUM output and the other producing the CARRY.
The truth table of a half-adder, showing all possible input combinations and
the corresponding outputs are shown below.
Truth Table:
Inputs Outputs
A B Sum (S) Carry (C)
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
K-map simplification:
The Boolean expressions for the SUM and CARRY outputs are given by the
equations,
Sum, S = A'B + AB' = A ⊕ B
Carry, C = A . B
The first equation, for the SUM output, is that of an EX-OR gate; the second, for the CARRY output, is that of an AND gate.
The logic diagram of the half-adder is,
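The half-adder's behavior follows directly from the two equations above; a minimal sketch:

```python
def half_adder(a, b):
    # Sum = A XOR B, Carry = A AND B
    return a ^ b, a & b

# Matches every row of the half-adder truth table
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)
```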
Full-Adder:
A full-adder is a combinational circuit that forms the arithmetic sum of three input bits. Two of the input variables represent the significant bits to be added. The third input represents the carry from the previous lower significant position. The block diagram of the full adder is given by,
The full adder circuit overcomes the limitation of the half-adder, which can
be used to add two bits only. As there are three input variables, eight different input
combinations are possible.
Truth Table:
Inputs Outputs
A B Cin Sum (S) Carry (Cout)
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
The Boolean expressions for the SUM and CARRY outputs are given by the equations,
Sum, S = A'B'Cin + A'BCin' + AB'Cin' + ABCin = A ⊕ B ⊕ Cin
Carry, Cout = A'BCin + AB'Cin + ABCin' + ABCin = AB + ACin + BCin
The logic diagram of the full adder can also be implemented with two half-adders and one OR gate. The S output from the second half-adder is the exclusive-OR of Cin and the output of the first half-adder, giving
S = Cin ⊕ (A ⊕ B)
and the carry output is,
Cout = AB + A'BCin + AB'Cin
= B (A + Cin) + AB'Cin
= AB + BCin + AB'Cin
= AB + Cin (B + AB')
= AB + Cin (A + B)
= AB + ACin + BCin.
Implementation of full adder with two half-adders and an OR gate
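The two-half-adder construction can be sketched directly (Python for illustration); note that the two half-adder carries can never be 1 simultaneously, so an OR gate suffices for Cout:

```python
def half_adder(a, b):
    return a ^ b, a & b

def full_adder(a, b, cin):
    # First half-adder adds A and B; the second adds Cin to that partial sum.
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2          # OR gate combines the two carries

# The pair (Cout, S) always equals the arithmetic sum A + B + Cin
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2*cout + s == a + b + cin
```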
Half -Subtractor
A half-subtractor is a combinational circuit that can be used to subtract one
binary digit from another to produce a DIFFERENCE output and a BORROW output.
The BORROW output here specifies whether a ‘1’ has been borrowed to perform the
subtraction.
Inputs Outputs
A B Difference (D) Borrow (Bout)
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
K-map simplification:
The Boolean expressions for the DIFFERENCE and BORROW outputs are
given by the equations,
Difference, D = A'B + AB' = A ⊕ B
Borrow, Bout = A' . B
The first equation, for the DIFFERENCE (D) output, is that of an exclusive-OR gate; the expression for the BORROW output (Bout) is that of an AND gate with input A complemented before it is fed to the gate.
The logic diagram of the half-subtractor is,
Full Subtractor:
A full subtractor performs subtraction operation on two bits, a minuend
and a subtrahend, and also takes into consideration whether a ‘1’ has already been
borrowed by the previous adjacent lower minuend bit or not.
As a result, there are three bits to be handled at the input of a full subtractor,
namely the two bits to be subtracted and a borrow bit designated as B in. There are
two outputs, namely the DIFFERENCE output D and the BORROW output Bo. The
BORROW output bit tells whether the minuend bit needs to borrow a ‘1’ from the
next possible higher minuend bit.
K-map simplification:
The simplified expressions are,
Difference, D = A ⊕ B ⊕ Bin
Borrow, Bout = A'B + A'Bin + BBin
Implementation of full-subtractor
The logic diagram of the full-subtractor can also be implemented with two
half-subtractors and one OR gate. The difference,D output from the second half
subtractor is the exclusive-OR of Bin and the output of the first half-subtractor, giving
Implementation of full-subtractor with two half-subtractors and an OR gate
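The two-half-subtractor construction can be checked the same way; the arithmetic identity A - B - Bin = D - 2·Bout holds for every input combination:

```python
def half_subtractor(a, b):
    # Difference = A XOR B, Borrow = A'B
    return a ^ b, (1 - a) & b

def full_subtractor(a, b, bin_):
    d1, b1 = half_subtractor(a, b)
    d, b2 = half_subtractor(d1, bin_)
    return d, b1 | b2          # OR gate combines the two borrows

for a in (0, 1):
    for b in (0, 1):
        for bn in (0, 1):
            d, bout = full_subtractor(a, b, bn)
            assert a - b - bn == d - 2*bout
```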
4-Bit Binary Parallel Adder (Ripple-Carry Adder):
A binary parallel adder adds two 4-bit numbers A3A2A1A0 and B3B2B1B0. The bits are added with full adders, starting from the least significant position, to form the sum bit and carry bit. The input carry C0 in the least significant position must be 0. The carry output of each lower-order stage is connected to the carry input of the next higher-order stage. Hence this type of adder is called a ripple-carry adder.
In the least significant stage, A0, B0 and C0 (which is 0) are added resulting in
sum S0 and carry C1. This carry C1 becomes the carry input to the second stage.
Similarly in the second stage, A1, B1 and C1 are added resulting in sum S1 and carry C2; in the third stage, A2, B2 and C2 are added resulting in sum S2 and carry C3; and in the fourth stage, A3, B3 and C3 are added resulting in sum S3 and C4, which is the output carry. Thus the circuit results in a sum (S3 S2 S1 S0) and a carry output (Cout).
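The stage-by-stage operation can be sketched as follows (bits listed LSB first, so stage 0 comes first in each list):

```python
def ripple_carry_add(a_bits, b_bits, c0=0):
    # a_bits, b_bits: lists of bits, LSB first (A0, A1, ...)
    carry = c0
    sums = []
    for a, b in zip(a_bits, b_bits):
        sums.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))   # carry ripples to the next stage
    return sums, carry

# 1111 + 1011 = 11010: sum bits 1010 with output carry 1
s, cout = ripple_carry_add([1, 1, 1, 1], [1, 1, 0, 1])
assert (s, cout) == ([0, 1, 0, 1], 1)
```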
Though the parallel binary adder is said to generate its output immediately
after the inputs are applied, its speed of operation is limited by the carry
propagation delay through all stages. However, there are several methods to reduce
this delay.
One of the methods of speeding up this process is look-ahead carry addition
which eliminates the ripple-carry delay.
For example, addition of the two numbers (1111 + 1011) gives the result 11010 (sum 1010 with a final carry of 1).
Addition of the LSB position produces a carry into the second position. This carry
when added to the bits of the second position, produces a carry into the third
position. This carry when added to bits of the third position, produces a carry into
the last position. The sum bit generated in the last position (MSB) depends on the
carry that was generated by the addition in the previous position. i.e., the adder will
not produce correct result until LSB carry has propagated through the intermediate full-
adders. This represents a time delay that depends on the propagation delay
produced in each full-adder. For example, if each full adder is considered to have a propagation delay of 8 ns, then S3 will not reach its correct value until 24 ns after the LSB carry is generated. Therefore the total time required to perform the addition is 24 + 8 = 32 ns.
The method of speeding up this process by eliminating the inter-stage carry delay is called look-ahead carry addition.
Consider the circuit of the full-adder shown above. Here we define two new binary variables: the carry generate Gi = AiBi and the carry propagate Pi = Ai ⊕ Bi. The sum and carry outputs can then be expressed as Si = Pi ⊕ Ci and Ci+1 = Gi + PiCi, so each carry can be computed directly from the inputs instead of waiting for the lower-order carries to ripple through.
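The carry-lookahead recurrence (carry generate Gi = Ai·Bi, carry propagate Pi = Ai ⊕ Bi, Ci+1 = Gi + Pi·Ci) can be sketched as below; real hardware expands the recurrence into two-level logic so all carries appear at once, while this illustrative sketch simply evaluates it:

```python
def lookahead_add(a_bits, b_bits, c0=0):
    # Carry generate Gi = Ai.Bi, carry propagate Pi = Ai XOR Bi
    g = [a & b for a, b in zip(a_bits, b_bits)]
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    c = [c0]
    for i in range(len(a_bits)):
        c.append(g[i] | (p[i] & c[i]))    # Ci+1 = Gi + Pi.Ci
    sums = [p[i] ^ c[i] for i in range(len(a_bits))]
    return sums, c[-1]

# Same example as the ripple-carry adder: 1111 + 1011 (LSB first)
s, cout = lookahead_add([1, 1, 1, 1], [1, 1, 0, 1])
assert (s, cout) == ([0, 1, 0, 1], 1)
```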
The addition and subtraction operation can be combined into one circuit
with one common binary adder. This is done by including an exclusive-OR gate with
each full adder. A 4-bit adder Subtractor circuit is shown below.
The mode input M controls the operation. When M= 0, the circuit is an adder
and when M=1, the circuit becomes a Subtractor. Each exclusive-OR gate receives
input M and one of the inputs of B. When M = 0, we have B ⊕ 0 = B. The full adders receive the value of B, the input carry is 0, and the circuit performs A plus B. When M = 1, we have B ⊕ 1 = B' and C0 = 1. The B inputs are all complemented and a 1 is
added through the input carry. The circuit performs the operation A plus the 2’s
complement of B. The exclusive-OR with output V is for detecting an overflow.
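The mode control described above can be sketched as follows (bits LSB first; each XOR gate is modeled by `b ^ m`):

```python
def add_sub4(a_bits, b_bits, m):
    # M = 0: A + B.  M = 1: each Bi is complemented by its XOR gate and
    # C0 = 1, so the circuit forms A + B' + 1 (A minus B in 2's complement).
    carry = m
    out = []
    for a, b in zip(a_bits, b_bits):
        b ^= m                            # XOR gate on each B input
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out, carry

assert add_sub4([1, 0, 0, 1], [1, 0, 1, 0], 0) == ([0, 1, 1, 1], 0)  # 9 + 5 = 14
assert add_sub4([1, 0, 0, 1], [1, 0, 1, 0], 1) == ([0, 0, 1, 0], 1)  # 9 - 5 = 4
```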
BCD ADDER (Decimal Adder):
Consider the arithmetic addition of two decimal digits in BCD, together with an input carry from a previous stage. Since each digit does not exceed 9, the sum cannot exceed 9 + 9 + 1 = 19 and is produced by means of the 4-bit binary adder. The output sum of the two decimal digits must be represented in BCD.
To implement BCD adder:
For the initial addition, a 4-bit binary adder is required,
Combinational circuit to detect whether the sum is greater than 9 and
One more 4-bit adder to add 6 (0110)2 with the sum of the first 4-bit adder, if
the sum is greater than 9 or carry is 1.
The logic circuit to detect sum greater than 9 can be determined by
simplifying the Boolean expression of the given truth table.
The two decimal digits, together with the input carry, are first added in the top 4-bit binary adder to provide the binary sum. When the output carry is equal to zero, nothing is added to the binary sum. When it is equal to one, binary (0110)2 is added to the binary sum through the bottom 4-bit adder.
The output carry generated from the bottom adder can be ignored, since it
supplies information already available at the output carry terminal. The output carry
from one stage must be connected to the input carry of the next higher-order stage.
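The correction rule for one BCD digit stage can be sketched behaviorally:

```python
def bcd_digit_add(a, b, cin=0):
    # a, b: BCD digits 0-9. The top 4-bit binary adder forms the sum first.
    binary = a + b + cin
    k = 1 if binary > 15 else 0        # output carry of the top adder
    z = binary & 0b1111                # its 4-bit sum
    # Correction: when K = 1 or the binary sum exceeds 9, the bottom
    # adder adds 0110 and the decimal output carry is 1.
    if k or z > 9:
        return (z + 0b0110) & 0b1111, 1
    return z, 0

assert bcd_digit_add(4, 3) == (7, 0)
assert bcd_digit_add(9, 8) == (7, 1)      # 17 -> BCD digit 7, carry 1
assert bcd_digit_add(9, 9, 1) == (9, 1)   # 19 -> BCD digit 9, carry 1
```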
MULTIPLEXER: (Data Selector)
A Multiplexer or MUX, is a combinational circuit with more than one
input line, one output line and more than one selection line. A multiplexer selects
binary information present from one of many input lines, depending upon the logic
status of the selection inputs, and routes it to the output line. Normally, there are 2^n input lines and n selection lines whose bit combinations determine which input is selected. The multiplexer is often labeled as MUX in block diagrams.
A multiplexer is also called a data selector, since it selects one of many
inputs and steers the binary information to the output line.
2-to-1- line Multiplexer:
The circuit has two data input lines, one output line and one selection line, S.
When S= 0, the upper AND gate is enabled and I0 has a path to the output.
When S=1, the lower AND gate is enabled and I1 has a path to the output.
Logic diagram
The multiplexer acts like an electronic switch that selects one of the two sources.
Truth table:
S Y
0 I0
1 I1
Selection lines S1 and S0 are decoded to select a particular AND gate. The outputs of
the AND gate are applied to a single OR gate that provides the 1-line output.
4-to-1-Line Multiplexer
Function table:
S1 S0 Y
0 0 I0
0 1 I1
1 0 I2
1 1 I3
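The AND-OR realization described above can be sketched as follows, with `1 - s` playing the role of the NOT gates on the select lines:

```python
def mux4(i, s1, s0):
    # Each AND gate is enabled by exactly one select combination;
    # a single OR gate combines the four AND outputs.
    n1, n0 = 1 - s1, 1 - s0
    return (i[0] & n1 & n0) | (i[1] & n1 & s0) | (i[2] & s1 & n0) | (i[3] & s1 & s0)

data = (0, 1, 1, 0)
assert mux4(data, 0, 0) == 0   # selects I0
assert mux4(data, 0, 1) == 1   # selects I1
assert mux4(data, 1, 0) == 1   # selects I2
assert mux4(data, 1, 1) == 0   # selects I3
```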
Quadruple 2-to-1-Line Multiplexer:
Although the circuit contains four 2-to-1-Line multiplexers, it is viewed as a
circuit that selects one of two 4-bit sets of data lines. The unit is enabled when E= 0.
Then if S= 0, the four A inputs have a path to the four outputs. On the other hand, if
S=1, the four B inputs are applied to the outputs. The outputs have all 0’s when E=
1, regardless of the value of S.
Application:
1. They are used as a data selector to select out of many data inputs.
2. They can be used to implement combinational logic circuit.
3. They are used in time multiplexing systems.
4. They are used in frequency multiplexing systems.
5. They are used in A/D and D/A converter.
6. They are used in data acquisition systems.
A Boolean function of n variables can be implemented with a multiplexer having n-1 selection lines: the first n-1 variables are applied to the selection inputs, and the remaining single variable (call it C) determines each data input according to the implementation table rules:
1. If both the minterms in the column are not circled, apply 0 to the corresponding input.
2. If both the minterms in the column are circled, apply 1 to the corresponding
input.
3. If the bottom minterm is circled and the top is not circled, apply C to the input.
4. If the top minterm is circled and the bottom is not circled, apply C’ to the input.
Multiplexer Implementation:
3. F ( A, B, C) = ∑m (1, 2, 4, 5)
Solution:
Variables, n = 3 (A, B, C); Select lines = n - 1 = 2 (S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^2 to 1 = 4-to-1 MUX
Multiplexer Implementation:
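For F(A, B, C) = ∑m(1, 2, 4, 5), with A and B on the select lines, the implementation table rules give the data inputs D0 = C, D1 = C', D2 = 1, D3 = 0 (one common assignment). A behavioral check:

```python
def mux4(i, s1, s0):
    return i[2*s1 + s0]

def f(a, b, c):
    # A, B drive the select lines; data inputs from the implementation
    # table: D0 = C, D1 = C', D2 = 1, D3 = 0
    return mux4((c, 1 - c, 1, 0), a, b)

minterms = {1, 2, 4, 5}
for m in range(8):
    a, b, c = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert f(a, b, c) == (1 if m in minterms else 0)
```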
5. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (A, B, C, D) = ∑m (0, 1, 2, 4, 6, 9, 12, 14)
Solution:
Variables, n = 4 (A, B, C, D)
Select lines = n - 1 = 3 (S2, S1, S0)
Multiplexer Implementation (Using 8: 1 MUX)
Multiplexer Implementation:
Multiplexer Implementation:
Solution:
F = AB'D + A'C'D + B'CD' + AC'D
= AB'D (C' + C) + A'C'D (B' + B) + B'CD' (A' + A) + AC'D (B' + B)
= AB'C'D + AB'CD + A'B'C'D + A'BC'D + A'B'CD' + AB'CD' + AB'C'D + ABC'D
= AB'C'D + AB'CD + A'B'C'D + A'BC'D + A'B'CD' + AB'CD' + ABC'D (dropping the repeated term AB'C'D)
= m9 + m11 + m1 + m5 + m2 + m10 + m13
= ∑m (1, 2, 5, 9, 10, 11, 13).
Implementation Table:
Multiplexer Implementation:
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:
Multiplexer Implementation:
DEMULTIPLEXER
The block diagram of a demultiplexer, which is opposite to a multiplexer in its operation, is shown above. The circuit has one input signal, n select signals and 2^n output signals. The select inputs determine to which output the data input will be connected. As the serial data is changed to parallel data, i.e., the input is caused to appear on one of the 2^n output lines, the demultiplexer is also called a "data distributor" or a "serial-to-parallel converter".
Logic Symbol
The input variable Din has a path to all four outputs, but the input
information is directed to only one of the output lines. The truth table of the 1-to-
4 demultiplexer is shown below.
Enable S1 S0 Din Y0 Y1 Y2 Y3
0 x x x 0 0 0 0
1 0 0 0 0 0 0 0
1 0 0 1 1 0 0 0
1 0 1 0 0 0 0 0
1 0 1 1 0 1 0 0
1 1 0 0 0 0 0 0
1 1 0 1 0 0 1 0
1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 1
Truth table of 1-to-4 demultiplexer
From the truth table, it is clear that the data input, Din is connected to the
output Y0, when S1= 0 and S0= 0 and the data input is connected to output Y1 when
S1= 0 and S0= 1. Similarly, the data input is connected to output Y2 and Y3 when S1= 1
and S0= 0 and when S1= 1 and S0= 1, respectively. Also, from the truth table, the
expression for outputs can be written as follows,
Y0= S1’S0’Din
Y1= S1’S0Din
Y2= S1S0’Din
Y3= S1S0Din
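These four output equations, gated by the enable input, can be sketched directly:

```python
def demux4(din, s1, s0, enable=1):
    # Y0 = S1'S0'Din, Y1 = S1'S0.Din, Y2 = S1.S0'Din, Y3 = S1.S0.Din
    e = enable & din
    n1, n0 = 1 - s1, 1 - s0
    return (e & n1 & n0, e & n1 & s0, e & s1 & n0, e & s1 & s0)

assert demux4(1, 0, 0) == (1, 0, 0, 0)            # Din routed to Y0
assert demux4(1, 1, 0) == (0, 0, 1, 0)            # Din routed to Y2
assert demux4(1, 1, 1, enable=0) == (0, 0, 0, 0)  # disabled: all outputs 0
```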
The logic diagram of the 1-to-4 demultiplexer can be implemented using four 3-input AND gates and two NOT gates. Here, the input data line Din is connected to all the AND gates. The two select lines S1, S0 enable only one gate at a time, and the data that appears on the input line passes through the selected gate to the associated output line.
1-to-8 Demultiplexer
A 1-to-8 demultiplexer has a single input, Din, eight outputs (Y0 to Y7) and
three select inputs (S2, S1 and S0). It distributes one input line to eight output lines
based on the select inputs. The truth table of 1-to-8 demultiplexer is shown below.
Din S2 S1 S0 Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0
0 x x x 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1
1 0 0 1 0 0 0 0 0 0 1 0
1 0 1 0 0 0 0 0 0 1 0 0
1 0 1 1 0 0 0 0 1 0 0 0
1 1 0 0 0 0 0 1 0 0 0 0
1 1 0 1 0 0 1 0 0 0 0 0
1 1 1 0 0 1 0 0 0 0 0 0
1 1 1 1 1 0 0 0 0 0 0 0
Truth table of 1-to-8 demultiplexer
From the above truth table, it is clear that the data input is connected to one of the eight AND gates, but only one of the eight AND gates will be enabled by the select input lines. For example, if S2S1S0 = 000, then only AND gate-0 will be enabled and thereby the data input Din will appear at Y0. Similarly, for the other combinations of the select inputs, the input Din will appear at the respective output.
Logic diagram of 1-to-8 demultiplexer
2. Implement full subtractor using demultiplexer.
Inputs Outputs
A B Bin Difference(D) Borrow(Bout)
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1
DECODERS
A decoder is a combinational circuit that converts binary information from n input lines to a maximum of 2^n unique output lines.
2-to-4 Line Decoder:
Here the 2 inputs are decoded into 4 outputs, each output representing one of the minterms of the two input variables.
Inputs Outputs
Enable A B Y3 Y2 Y1 Y0
0 x x 0 0 0 0
1 0 0 0 0 0 1
1 0 1 0 0 1 0
1 1 0 0 1 0 0
1 1 1 1 0 0 0
As shown in the truth table, if the enable input is 1 (EN = 1), only one of the outputs (Y0 - Y3) is active for a given input combination.
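Each output is simply the minterm of the two inputs gated by the enable, which a short sketch confirms (outputs returned in the Y3 Y2 Y1 Y0 order of the truth table):

```python
def decoder_2to4(a, b, en=1):
    # Each output is a minterm of A, B gated by the enable input
    na, nb = 1 - a, 1 - b
    y0 = en & na & nb
    y1 = en & na & b
    y2 = en & a & nb
    y3 = en & a & b
    return y3, y2, y1, y0

assert decoder_2to4(0, 0) == (0, 0, 0, 1)
assert decoder_2to4(1, 0) == (0, 1, 0, 0)
assert decoder_2to4(1, 1) == (1, 0, 0, 0)
assert decoder_2to4(1, 1, en=0) == (0, 0, 0, 0)   # disabled
```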
BCD to Seven-Segment Decoder:
A BCD-to-seven-segment decoder accepts a decimal digit in BCD and generates the corresponding seven-segment code.
Digit Display Segments Activated
0 a, b, c, d, e, f
1 b, c
2 a, b, d, e, g
3 a, b, c, d, g
4 b, c, f, g
5 a, c, d, f, g
6 a, c, d, e, f, g
7 a, b, c
8 a, b, c, d, e, f, g
9 a, b, c, d, f, g
Truth table:
BCD code 7-Segment code
Digit
A B C D a b c d e f g
0 0 0 0 0 1 1 1 1 1 1 0
1 0 0 0 1 0 1 1 0 0 0 0
2 0 0 1 0 1 1 0 1 1 0 1
3 0 0 1 1 1 1 1 1 0 0 1
4 0 1 0 0 0 1 1 0 0 1 1
5 0 1 0 1 1 0 1 1 0 1 1
6 0 1 1 0 1 0 1 1 1 1 1
7 0 1 1 1 1 1 1 0 0 0 0
8 1 0 0 0 1 1 1 1 1 1 1
9 1 0 0 1 1 1 1 1 0 1 1
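The decoder's input-output behavior (before K-map simplification of the individual segment outputs) can be modeled as a lookup of the truth-table rows:

```python
# (a, b, c, d, e, f, g) rows copied from the truth table above
SEVEN_SEG = {
    0: (1, 1, 1, 1, 1, 1, 0), 1: (0, 1, 1, 0, 0, 0, 0),
    2: (1, 1, 0, 1, 1, 0, 1), 3: (1, 1, 1, 1, 0, 0, 1),
    4: (0, 1, 1, 0, 0, 1, 1), 5: (1, 0, 1, 1, 0, 1, 1),
    6: (1, 0, 1, 1, 1, 1, 1), 7: (1, 1, 1, 0, 0, 0, 0),
    8: (1, 1, 1, 1, 1, 1, 1), 9: (1, 1, 1, 1, 0, 1, 1),
}

def bcd_to_7seg(a, b, c, d):
    # A is the BCD MSB, D the LSB
    return SEVEN_SEG[8*a + 4*b + 2*c + d]

assert bcd_to_7seg(0, 1, 0, 1) == (1, 0, 1, 1, 0, 1, 1)   # digit 5
assert sum(bcd_to_7seg(1, 0, 0, 0)) == 7                  # digit 8 lights all segments
```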
K-map
Logic Diagram
ENCODERS
An encoder is a digital circuit that performs the inverse operation of a
decoder. Hence, the opposite of the decoding process is called encoding. An encoder
is a combinational circuit that converts binary information from 2^n input lines to a maximum of n unique output lines.
The general structure of encoder circuit is
It has 2^n input lines, only one of which is active at any time, and n output lines.
It encodes one of the active inputs to a coded binary output with ‘n’ bits. In an
encoder, the number of outputs is less than the number of inputs.
Octal-to-Binary Encoder
It has eight inputs (one for each of the octal digits) and the three outputs that
generate the corresponding binary number. It is assumed that only one input has a
value of 1 at any given time.
The encoder can be implemented with OR gates whose inputs are
determined directly from the truth table. Output z is equal to 1 when the input octal digit is 1, 3, 5 or 7. Output y is 1 for octal digits 2, 3, 6 or 7, and output x is 1 for digits 4, 5, 6 or 7.
Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 x y z
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
These conditions can be expressed by the following output Boolean functions:
z= D1+ D3+ D5+ D7
y= D2+ D3+ D6+ D7
x= D4+ D5+ D6+ D7
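The three OR-gate equations can be checked against all eight one-hot inputs:

```python
def octal_to_binary(d):
    # d = (D0, ..., D7) with exactly one input equal to 1
    x = d[4] | d[5] | d[6] | d[7]
    y = d[2] | d[3] | d[6] | d[7]
    z = d[1] | d[3] | d[5] | d[7]
    return x, y, z

# Each octal digit maps to its 3-bit binary equivalent
for digit in range(8):
    d = [0] * 8
    d[digit] = 1
    x, y, z = octal_to_binary(d)
    assert 4*x + 2*y + z == digit
```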
The encoder can be implemented with three OR gates. The encoder defined in the above table has the limitation that only one input can be active at any given time. If two inputs are active simultaneously, the output produces an undefined combination.
For example, if D3 and D6 are 1 simultaneously, the output of the encoder may be 111. This does not represent either D6 or D3. To resolve this problem, encoder circuits
must establish an input priority to ensure that only one input is encoded. If we
establish a higher priority for inputs with higher subscript numbers and if D3 and D6
are 1 at the same time, the output will be 110 because D6 has higher priority than D3.
Priority Encoder
A priority encoder is an encoder circuit that includes the priority function. In
priority encoder, if two or more inputs are equal to 1 at the same time, the input
having the highest priority will take precedence.
In addition to the two outputs x and y, the circuit has a third output, V (valid
bit indicator). It is set to 1 when one or more inputs are equal to 1. If all inputs are 0,
there is no valid input and V is equal to 0.
The higher the subscript number, higher the priority of the input. Input D3,
has the highest priority. So, regardless of the values of the other inputs, when D3 is 1,
the output for xy is 11.
D2 has the next priority level. The output is 10 if D2 = 1, provided D3 = 0. The output for D1 is generated only if the higher priority inputs are 0, and so on down the priority levels.
Truth table:
Inputs Outputs
D0 D1 D2 D3 x y V
0 0 0 0 x x 0
1 0 0 0 0 0 1
x 1 0 0 0 1 1
x x 1 0 1 0 1
x x x 1 1 1 1
Although the above table has only five rows, when each don’t care condition
is replaced first by 0 and then by 1, we obtain all 16 possible input combinations.
For example, the third row in the table with X100 represents minterms 0100 and
1100. The don’t care condition is replaced by 0 and 1 as shown in the table below.
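The usual K-map simplification of this priority table gives x = D2 + D3, y = D3 + D1D2' and V = D0 + D1 + D2 + D3, which a sketch confirms against the five rows above:

```python
def priority_encoder(d0, d1, d2, d3):
    # Usual simplification of the priority table:
    # x = D2 + D3, y = D3 + D1.D2', V = D0 + D1 + D2 + D3
    x = d2 | d3
    y = d3 | (d1 & (1 - d2))
    v = d0 | d1 | d2 | d3
    return x, y, v

assert priority_encoder(0, 0, 0, 0) == (0, 0, 0)   # no valid input: V = 0
assert priority_encoder(1, 0, 0, 0) == (0, 0, 1)
assert priority_encoder(1, 1, 0, 0) == (0, 1, 1)   # D1 wins over D0
assert priority_encoder(0, 1, 1, 0) == (1, 0, 1)   # D2 wins over D1
assert priority_encoder(0, 1, 1, 1) == (1, 1, 1)   # D3 has highest priority
```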
K-map Simplification:
MAGNITUDE COMPARATOR
A magnitude comparator is a combinational circuit that compares
two given numbers (A and B) and determines whether one is equal to, less than or
greater than the other. The output is in the form of three binary variables
representing the conditions A = B, A>B and A<B, if A and B are the two numbers
being compared.
For comparison of two n-bit numbers, the classical method to achieve the
Boolean expressions requires a truth table of 22n entries and becomes too lengthy and
cumbersome.
K-map Simplification:
Logic Diagram:
Let us consider two binary numbers A and B with four digits each: A = A3A2A1A0 and B = B3B2B1B0. The two numbers are equal if all pairs of significant digits are equal, i.e., A3 = B3, A2 = B2, A1 = B1 and A0 = B0. Since the numbers are binary, each digit possesses the value of either 1 or 0, and the equality relation of each pair can be expressed logically by the equivalence function as,
Xi = AiBi + Ai'Bi' for i = 0, 1, 2, 3
or, Xi = (Ai ⊕ Bi)', or Xi' = Ai ⊕ Bi
or, Xi = (AiBi' + Ai'Bi)'
where Xi = 1 only if the pair of bits in position i are equal (i.e., if both are 1 or both are 0).
To satisfy the equality condition of two numbers A and B, it is necessary that
all Xi must be equal to logic 1. This indicates the AND operation of all Xi variables.
In other words, we can write the Boolean expression for two equal 4-bit numbers as,
(A = B) = X3X2X1X0
The binary variable (A=B) is equal to 1 only if all pairs of digits of the two numbers
are equal.
To determine whether A is greater than or less than B, we inspect the relative magnitudes of pairs of significant digits, starting from the most significant position. If the two digits are equal, we compare the next lower significant pair, and the comparison continues until a pair of unequal digits is reached. This sequential comparison can be expressed logically as,
(A > B) = A3B3' + X3A2B2' + X3X2A1B1' + X3X2X1A0B0'
(A < B) = A3'B3 + X3A2'B2 + X3X2A1'B1 + X3X2X1A0'B0
The gate implementation of the three output variables just derived is simpler
than it seems because it involves a certain amount of repetition. The unequal
outputs can use the same gates that are needed to generate the equal output. The
logic diagram of the 4-bit magnitude comparator is shown below,
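The three comparator outputs (with Xi = (Ai ⊕ Bi)' and the standard greater-than and less-than expressions) can be sketched and spot-checked:

```python
def compare4(a, b):
    # a, b: 4-bit tuples, MSB first (A3 A2 A1 A0)
    x = [1 - (ai ^ bi) for ai, bi in zip(a, b)]   # Xi = (Ai XOR Bi)'
    eq = x[0] & x[1] & x[2] & x[3]
    gt = (a[0] & (1 - b[0])) | (x[0] & a[1] & (1 - b[1])) | \
         (x[0] & x[1] & a[2] & (1 - b[2])) | \
         (x[0] & x[1] & x[2] & a[3] & (1 - b[3]))
    lt = ((1 - a[0]) & b[0]) | (x[0] & (1 - a[1]) & b[1]) | \
         (x[0] & x[1] & (1 - a[2]) & b[2]) | \
         (x[0] & x[1] & x[2] & (1 - a[3]) & b[3])
    return gt, eq, lt

assert compare4((1, 0, 1, 0), (1, 0, 0, 1)) == (1, 0, 0)   # 10 > 9
assert compare4((0, 1, 1, 1), (0, 1, 1, 1)) == (0, 1, 0)   # 7 = 7
assert compare4((0, 0, 1, 1), (1, 0, 0, 0)) == (0, 0, 1)   # 3 < 8
```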
UNIT-II SYNCHRONOUS SEQUENTIAL LOGIC
INTRODUCTION
In combinational logic circuits, the outputs at any instant of time depend only on the input signals present at that time; for any change in input, the output occurs immediately. A sequential circuit, in contrast, consists of a combinational circuit to which storage (memory) elements are connected to form a feedback path.
The information stored in the memory elements at any given time defines the
present state of the sequential circuit. The present state and the external circuit
determine the output and the next state of sequential circuits.
Thus in sequential circuits, the output variables depend not only on the present input variables but also on the past history of input variables.
TRIGGERING OF FLIP-FLOPS
The state of a Flip-Flop is switched by a momentary change in the input signal. This momentary change is called a trigger, and the transition it causes is said to trigger the Flip-Flop.
Flip-Flops are synchronous bistable devices (has two outputs Q and Q’). In
this case, the term synchronous means that the output changes state only at a
specified point on the triggering input called the clock (CLK), i.e., changes in the
output occur in synchronization with the clock.
An edge-triggered Flip-Flop changes state either at the positive edge
(rising edge) or at the negative edge (falling edge) of the clock pulse and is sensitive
to its inputs only at this transition of the clock. The different types of edge-
triggered Flip-
Flops are—
S-R Flip-Flop (Set-Reset)
J-K Flip-Flop
D Flip-Flop (Delay)
T Flip-Flop (Toggle)
Although the S-R Flip-Flop is not available in IC form, it is the basis for the D and J-K Flip-Flops. The data on the S and R inputs are transferred to the Flip-Flop's output only on the triggering edge of the clock pulse. The circuit is similar to the SR latch except that the enable signal is replaced by a clock pulse (CLK). On the positive edge of the clock pulse, the circuit responds to the S and R inputs.
SR Flip-Flop
When S is HIGH and R is LOW, the Q output goes HIGH on the triggering
edge of the clock pulse, and the Flip-Flop is SET. When S is LOW and R is HIGH, the
Q output goes LOW on the triggering edge of the clock pulse, and the Flip-Flop is
RESET. When both S and R are LOW, the output does not change from its prior state.
An invalid condition exists when both S and R are HIGH.
S R Qn Qn+1
0 0 0 0
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 x
1 1 1 x
Characteristic table
K-map Simplification:
Characteristic equation: Qn+1 = S + R'Qn (with the constraint SR = 0)
JK Flip Flop
The data input J and the output Q' are applied to the first AND gate and its
output (JQ’) is applied to the S input of SR Flip-Flop. Similarly, the data input K and
the output Q are applied to the second AND gate and its output (KQ) is applied to
the R input of SR Flip-Flop.
J= K= 0
When J = K = 0, both AND gates are disabled. Therefore clock pulses have no effect; the Flip-Flop output remains the same as its previous value (no change).
J = 0, K = 1
When J = 0 and K = 1, AND gate 1 is disabled, i.e., S = 0 and R = 1. This condition resets the Flip-Flop to the Q = 0 state. Similarly, when J = 1 and K = 0, the Flip-Flop is set (Q = 1), and when J = K = 1, the output toggles to its complement.
Truth table:
Inputs Output
CLK State
J K Qn+1
1 0 0 Qn No Change
1 0 1 0 Reset
1 1 0 1 Set
1 1 1 Qn’ Toggle
Qn J K Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0
Characteristic equation:
Qn+1 = JQn' + K'Qn
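The characteristic equation Qn+1 = JQn' + K'Qn can be checked against all four JK modes:

```python
def jk_next(q, j, k):
    # Characteristic equation: Q(n+1) = J.Qn' + K'.Qn
    return (j & (1 - q)) | ((1 - k) & q)

for q in (0, 1):
    assert jk_next(q, 0, 0) == q        # J=K=0: no change
    assert jk_next(q, 0, 1) == 0        # J=0, K=1: reset
    assert jk_next(q, 1, 0) == 1        # J=1, K=0: set
    assert jk_next(q, 1, 1) == 1 - q    # J=K=1: toggle
```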
D Flip-Flop:
Like in D latch, in D Flip-Flop the basic SR Flip-Flop is used with
complemented inputs. The D Flip-Flop is similar to D-latch except clock pulse is
used instead of enable input.
Clock D Qn+1 State
1 0 0 Reset
1 1 1 Set
0 x Qn No Change
D Flip-Flop
One way to eliminate the undesirable indeterminate state in the SR Flip-Flop is to ensure that inputs S and R are never equal to 1 at the same time. This is done in the D Flip-Flop.
Looking at the truth table for D Flip-Flop we can realize that Qn+1 function
follows the D input at the positive going edges of the clock pulses.
Qn D Qn+1
0 0 0
0 1 1
1 0 0
1 1 1
Characteristic equation:
Qn+1= D.
T Flip-Flop
The T (Toggle) Flip-Flop is a modification of the JK Flip-Flop. It is obtained
from JK Flip-Flop by connecting both inputs J and K together, i.e., single input.
Regardless of the present state, the Flip-Flop complements its output when the
clock pulse occurs while input T= 1.
T Qn+1 State
0 Qn No Change
1 Qn’ Toggle
T Flip-Flop
When T = 0, Qn+1 = Qn, i.e., the next state is the same as the present state and no change occurs in the output.
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0
K-map Simplification:
Characteristic equation:
Qn+1 = TQn' + T'Qn = T ⊕ Qn
Master-Slave JK Flip-Flop
A master-slave Flip-Flop consists of clocked JK flip-flop as a master and
clocked SR flip-flop as a slave. The output of the master flip-flop is fed as an input to
the slave flip-flop. Clock signal is connected directly to the master flip-flop, but is
connected through inverter to the slave flip-flop. Therefore, the information present
at the J and K inputs is transmitted to the output of master flip-flop on the positive
clock pulse and it is held there until the negative clock pulse occurs, after which it is
allowed to pass through to the output of slave flip-flop. The output of the slave flip-
flop is connected as a third input of the master JK flip-flop.
Logic diagram
When J= 1 and K= 0, the master sets on the positive clock. The high Y
output of the master drives the S input of the slave, so at negative clock, slave sets,
copying the action of the master.
When J= 0 and K= 1, the master resets on the positive clock. The high Y’
output of the master goes to the R input of the slave. Therefore, at the negative clock
slave resets, again copying the action of the master.
When J= 1 and K= 1, master toggles on the positive clock and the output of
master is copied by the slave on the negative clock. At this instant, feedback inputs
to the master flip-flop are complemented, but as it is negative half of the clock pulse,
master flip-flop is inactive. This prevents race around condition.
The clocked master-slave J-K Flip-Flop using NAND gate is shown below.
SR Flip-Flop:
Excitation table:
Present State (Qn) Next State (Qn+1) S R
0 0 0 x
0 1 1 0
1 0 0 1
1 1 x 0
The above table presents the excitation table for SR Flip-Flop. It consists of
present state (Qn), next state (Qn+1) and a column for each input to show how the
required transition is achieved.
There are 4 possible transitions from present state to next state. The required
Input conditions for each of the four transitions are derived from the information
available in the characteristic table. The symbol ‘x’ denotes the don’t care condition;
it does not matter whether the input is 0 or 1.
JK Flip-Flop:
Excitation table:
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
D Flip-Flop:
Excitation table:
Qn Qn+1 D
0 0 0
0 1 1
1 0 0
1 1 1
Conversion of Flip-Flops:
SR Flip-Flop to D Flip-Flop
SR Flip-Flop to JK Flip-Flop
SR Flip-Flop to T Flip-Flop
JK Flip-Flop to T Flip-Flop
JK Flip-Flop to D Flip-Flop
D Flip-Flop to T Flip-Flop
T Flip-Flop to D Flip-Flop
SR Flip-Flop to D Flip-Flop:
1. Write the characteristic table for the required Flip-Flop (D Flip-Flop).
2. Write the excitation table for the given Flip-Flop (SR Flip-Flop).
3. Determine the expressions for the given Flip-Flop inputs (S and R) by using K-maps.
4. Draw the Flip-Flop conversion logic diagram to obtain the required Flip-Flop (D Flip-Flop) by using the above obtained expressions.
SR to JK Flip-Flop
SR Flip-Flop to T Flip-Flop
Input Present state Next state Flip-Flop Inputs
T Qn Qn+1 S R
0 0 0 0 x
0 1 1 x 0
1 0 1 1 0
1 1 0 0 1
SR to T Flip-Flop
JK Flip-Flop to T Flip-Flop
The excitation table for the above conversion is
JK to T Flip-Flop
JK to D Flip-Flop
D Flip-Flop to T Flip-Flop
The excitation table for the above conversion is
D to T Flip-Flop
T Flip-Flop to D Flip-Flop
The excitation table for the above conversion is
Input Present state Next state Flip-Flop Input
D Qn Qn+1 T
0 0 0 0
0 1 0 1
1 0 1 1
1 1 1 0
T to D Flip-Flop
Moore model:
In the Moore model, the outputs are a function of the present state of the Flip-Flops only. Since the output depends only on the present state, it changes only after the clock pulse is applied.
Mealy model:
In the Mealy model, the outputs are functions of both the present state of the Flip-Flops and the external inputs.
ANALYSIS PROCEDURE:
The synchronous sequential circuit analysis is summarized as given below:
1. Assign a state variable to each Flip-Flop in the synchronous sequential circuit.
2. Determine the Flip-Flop input equations and output equations from the logic diagram.
3. Derive the state table and the state diagram describing the behaviour of the circuit.
1. A sequential circuit has two JK Flip-Flops A and B, one input (x) and one output
Logic Diagram:
State table:
State Diagram:
State Diagram
2. A sequential circuit has two D Flip-Flops A and B, one input (x) and one output (y). The Flip-Flop input functions are:
DA= Ax+ Bx DB= A’x
and the circuit output function is, Y= (A+ B) x’.
State Table:
Present state Next state (x=0) Next state (x=1) Output y (x=0) (x=1)
A B A B A B y y
0 0 0 0 0 1 0 0
0 1 0 0 1 1 1 0
1 0 0 0 1 0 1 0
1 1 0 0 1 0 1 0
3. A sequential circuit has two JK Flip-Flops A and B. The Flip-Flop input functions are:
JA= B JB= x'
KA= Bx' KB= A ⊕ x
Logic diagram:
The output function is not given in the problem. The output of the Flip-Flops
may be considered as the output of the circuit.
State table:
Present state Input Flip-Flop Inputs Next state
A B x JA= B KA= Bx' JB= x' KB= A⊕x A(t+1) B(t+1)
0 0 0 0 0 1 0 0 1
0 0 1 0 0 0 1 0 0
0 1 0 1 1 1 0 1 1
0 1 1 1 0 0 1 1 0
1 0 0 0 0 1 1 1 1
1 0 1 0 0 0 0 1 0
1 1 0 1 1 1 1 0 0
1 1 1 1 0 0 0 1 1
Next state
Present state
X= 0 X= 1
A B A B A B
0 0 0 1 0 0
0 1 1 1 1 0
1 0 1 1 1 0
1 1 0 0 1 1
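The state table above can be reproduced programmatically from the JK characteristic equation Q(t+1) = JQ' + K'Q. A minimal sketch follows; the input equations are taken from the problem, with KB taken as A ⊕ x, which is consistent with the tabulated KB column:

```python
def jk_next(Q, J, K):
    # JK characteristic equation: Q(t+1) = J·Q' + K'·Q
    return int((J and not Q) or (not K and Q))

def next_state(A, B, x):
    JA, KA = B, (B and not x)       # JA = B,  KA = Bx'
    JB, KB = int(not x), (A ^ x)    # JB = x', KB = A xor x
    return jk_next(A, JA, KA), jk_next(B, JB, KB)

# regenerate all eight rows of the state table
for A in (0, 1):
    for B in (0, 1):
        for x in (0, 1):
            print(A, B, x, *next_state(A, B, x))
```

The printed rows match the next-state columns A(t+1), B(t+1) of the table.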
State Diagram:
4. A sequential circuit has two JK Flop-Flops A and B, two inputs x and y and
one output z. The Flip-Flop input equation and circuit output equations are
JA = Bx + B' y' KA = B' xy'
JB = A' x KB = A+ xy'
z = Ax' y' + Bx' y'
State table:
To obtain the next-state values of a sequential circuit with JK Flip-Flop, use
the JK Flip-Flop characteristic table,
State
5. Analyze the synchronous Mealy machine and obtain its state diagram.
Soln:
The given synchronous Mealy machine consists of two D Flip-Flops, one input and one output. The state table is derived below.
Present state Next state Output
X= 0 X= 1 X= 0 X= 1
Y1 Y2 Y1 Y2 Y1 Y2 Z Z
0 0 0 0 0 1 0 0
0 1 1 1 0 1 0 0
1 0 0 0 0 1 0 0
1 1 0 0 0 1 0 1
Second form of state table
State Diagram:
Soln:
Using the assigned variables Y1 and Y2 for the two JK Flip-Flops, we can write the four excitation input equations and the Moore output equation as follows:
JA= Y2X ; KA= Y2'
State Diagram:
Here the output depends on the present state only and is independent of the input. The two values inside each circle, separated by a slash, are the present state and the output.
7. A sequential circuit has two T Flip-Flops A and B. The Flip-Flop input functions
are:
TA= Bx TB= x
y= AB
(a) Draw the logic diagram of the circuit,
(b) Tabulate the state table,
(c) Draw the state diagram.
Soln:
Logic diagram:
State table:
Present state Input Flip-Flop Inputs Next state Output
Present state Next state Output
X= 0 X= 1 X= 0 X= 1
A B A B A B y y
0 0 0 0 0 1 0 0
0 1 0 1 1 0 0 0
1 0 1 0 1 1 0 0
1 1 1 1 0 0 1 1
Second form of state table
State Diagram:
STATE REDUCTION:
Two states are said to be redundant or equivalent if, for every possible set of inputs, they generate exactly the same output and the same next state. When two states are equivalent, one of them can be removed without altering the input-output relationship.
Since 'n' Flip-Flops produce 2^n states, a reduction in the number of states may result in a reduction in the number of Flip-Flops.
The need for state reduction or state minimization is explained with one
example.
Examples:
1. Reduce the number of states in the following state diagram and draw the
reduced state diagram.
State diagram
Step 1: Determine the state table for given state diagram
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
a b c 0 0
b d e 1 0
c c d 0 1
d a d 0 0
e c d 0 1
States c and e go to the same next states (c, d) and have outputs 0 and 1 for x=0 and x=1 respectively. Therefore state e can be removed and replaced by c.
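The equivalence check used above — identical next states and identical outputs for every input — can be automated. A minimal sketch, using the state table of this example:

```python
# State table: state -> (next@x=0, next@x=1, out@x=0, out@x=1)
table = {
    'a': ('b', 'c', 0, 0),
    'b': ('d', 'e', 1, 0),
    'c': ('c', 'd', 0, 1),
    'd': ('a', 'd', 0, 0),
    'e': ('c', 'd', 0, 1),
}

def reduce_states(table):
    """Repeatedly merge states whose rows (next states and outputs) are identical."""
    changed = True
    while changed:
        changed = False
        states = sorted(table)
        for i, s in enumerate(states):
            for t in states[i + 1:]:
                if table[s] == table[t]:
                    del table[t]                      # t is redundant
                    for k, (n0, n1, o0, o1) in table.items():
                        table[k] = (s if n0 == t else n0,
                                    s if n1 == t else n1, o0, o1)
                    changed = True
                    break
            if changed:
                break
    return table

reduced = reduce_states(table)
print(sorted(reduced))   # state e is merged into c
```

Note this only finds states with literally identical rows; a full implication-table method would also catch equivalences that depend on other equivalences.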
2. Reduce the number of states in the following state table and tabulate the reduced
state table.
and replaced by e.
The reduced state table-1 is shown below.
Now states d and f are equivalent: both go to the same next states (e, f) and have the same outputs (0, 1). Therefore one state can be removed; f is replaced by d.
The final reduced state table-2 is shown below.
Similarly, states 3 and 6 generate exactly the same next state and the same output for every possible set of inputs. States 3 and 6 go to next states 4 and 5 and have outputs 0 and 0 for x=0 and x=1 respectively. Therefore state 6 can be removed and replaced by 3. The final reduced state table is shown below.
From the above state table, A and D generate exactly the same next state and the same output for every possible set of inputs. States A and D go to next states D and C and have outputs 0 and 1 for x=0 and x=1 respectively; therefore state D can be removed and replaced by A. Similarly, C and F generate exactly the same next state and the same output for every possible set of inputs. States C and F go to next states H and D and have outputs 1 and 1 for x=0 and x=1 respectively; therefore state F can be removed and replaced by C.
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
A A C 0 1
B E A 1 1
C H A 1 1
E B G 0 1
G A C 0 1
H C A 0 1
I G H 1 1
Reduced state table-1
From the above reduced state table-1, A and G generate exactly the same next state and the same output for every possible set of inputs. States A and G go to next states A and C and have outputs 0 and 1 for x=0 and x=1 respectively. Therefore state G can be removed and replaced by A.
The final reduced state table-2 is shown below.
Present state Next state Output
X= 0 X= 1 X= 0 X= 1
A A C 0 1
B E A 1 1
C H A 1 1
E B A 0 1
H C A 0 1
I A H 1 1
Reduced state table-2
Soln:
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e f 0 1
e a f 0 1
f g f 0 1
g a f 0 1
First, states e and g are equivalent: both go to the same next states (a, f) and have the same outputs (0, 1), so g is removed and replaced by e. Then states d and f become equivalent: both go to the same next states (e, f) and have the same outputs (0, 1). Therefore one state can be removed; f is replaced by d.
A synchronous sequential circuit is made up of Flip-Flops and combinational gates. The design of the circuit consists of choosing the Flip-Flops and then finding a combinational gate structure which, together with the Flip-Flops, fulfills the stated specifications. The number of Flip-Flops is determined from the number of states needed in the circuit. The combinational circuit is derived from the state table.
Design procedure:
1. From the given problem statement, draw the state diagram.
2. From the state diagram, obtain the state table.
3. The number of states may be reduced by state reduction methods (if applicable).
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation table for JK Flip-Flop
Present State Next State Input
Qn Qn+1 T
0 0 0
0 1 1
1 0 1
1 1 0
Excitation table for T Flip-Flop
Present State Next State Input
Qn Qn+1 D
0 0 0
0 1 1
1 0 0
1 1 1
Excitation table for D Flip-Flop
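The three excitation tables above can be captured as small lookup functions, which is handy when filling a circuit's excitation table by program ('x' marks a don't-care):

```python
def jk_excitation(q, q_next):
    # required (J, K) for the transition q -> q_next
    return {(0, 0): (0, 'x'), (0, 1): (1, 'x'),
            (1, 0): ('x', 1), (1, 1): ('x', 0)}[(q, q_next)]

def t_excitation(q, q_next):
    return q ^ q_next    # T = 1 exactly when the state must toggle

def d_excitation(q, q_next):
    return q_next        # D always equals the desired next state

print(jk_excitation(1, 0), t_excitation(1, 0), d_excitation(1, 0))
```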
Problems
1. Design a clocked sequential machine using JK Flip-Flops for the state
diagram shown in the figure. Use state reduction if possible. Make proper state
assignment.
Soln:
State Table:
Binary Assignment:
Now each state is assigned binary values. Since there are three states, the number of Flip-Flops required is two, and 2-bit binary numbers are assigned to the states: a= 00; b= 01; and c= 10.
Excitation Table:
Input Present state Next state Flip-Flop Inputs Output
X A B A B JA KA JB KB Y
0 0 0 0 0 0 x 0 x 0
1 0 0 0 1 0 x 1 x 0
0 0 1 1 0 1 x x 1 0
1 0 1 0 1 0 x x 0 0
0 1 0 0 0 x 1 0 x 0
1 1 0 0 1 x 1 1 x 1
0 1 1 x x x x x x x
1 1 1 x x x x x x x
K-map
With these Flip-Flop input functions and circuit output function we can draw
the logic diagram as follows.
2. Design a clocked sequential machine using T Flip-Flops for the following state diagram. Use state reduction if possible and make a proper state assignment.
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b d c 0 0
c a b 1 0
d b a 1 1
Even though a and c have the same next states for X=0 and X=1, their outputs are not the same, so state reduction is not possible.
State Assignment:
Use straight binary assignments as a= 00, b= 01, c= 10 and d= 11, the
transition table is,
Input Present state Next state Flip-Flop Inputs Output
X A B A B TA TB Y
0 0 0 0 0 0 0 0
0 0 1 1 1 1 0 0
0 1 0 0 0 1 0 1
0 1 1 0 1 1 0 1
1 0 0 0 1 0 1 0
1 0 1 1 0 1 1 0
1 1 0 0 1 1 1 0
1 1 1 0 0 1 1 1
K-map simplification:
SHIFT REGISTERS:
(i) Serial in - serial out (ii) Serial in - parallel out
(iii) Parallel in - serial out (iv) Parallel in - parallel out
The entry of the four bits 1010 into the register is illustrated below, beginning
with the right-most bit. The register is initially clear. The 0 is put onto the data input
line, making D=0 for FF0. When the first clock pulse is applied, FF0 is reset, thus
storing the 0.
Next the second bit, which is a 1, is applied to the data input, making D=1 for
FF0 and D=0 for FF1 because the D input of FF1 is connected to the Q0 output. When
the second clock pulse occurs, the 1 on the data input is shifted into FF0, causing FF0
to set; and the 0 that was in FF0 is shifted into FFl.
The third bit, a 0, is now put onto the data-input line, and a clock pulse is
applied. The 0 is entered into FF0, the 1 stored in FF0 is shifted into FFl, and the 0
stored in FF1 is shifted into FF2.
The last bit, a 1, is now applied to the data input, and a clock pulse is applied.
This time the 1 is entered into FF0, the 0 stored in FF0 is shifted into FFl, the 1 stored
in FF1 is shifted into FF2, and the 0 stored in FF2 is shifted into FF3. This completes
the serial entry of the four bits into the shift register, where they can be stored for
any length of time as long as the Flip-Flops have dc power.
To get the data out of the register, the bits must be shifted out serially and
taken off the Q3 output. After CLK4, the right-most bit, 0, appears on the Q3 output.
When clock pulse CLK5 is applied, the second bit appears on the Q3 output.
Clock pulse CLK6 shifts the third bit to the output, and CLK7 shifts the fourth bit to the output. While the original four bits are being shifted out, more bits can be shifted in; here, all zeros are shown being shifted in.
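The serial-entry sequence described above can be modelled with a short sketch: each clock pulse moves the data-input bit into FF0 while every stage passes its old value to the next stage.

```python
def clock_pulse(register, data_in):
    # On each pulse the data bit enters FF0 and every stage passes its
    # old value to the next stage (FF0 -> FF1 -> FF2 -> FF3).
    return [data_in] + register[:-1]

reg = [0, 0, 0, 0]            # [Q0, Q1, Q2, Q3], register initially clear
for bit in [0, 1, 0, 1]:      # the word 1010, entered right-most bit first
    reg = clock_pulse(reg, bit)
print(reg)                    # after CLK4: Q0=1, Q1=0, Q2=1, Q3=0
```

After the fourth pulse the right-most bit of the original word (0) sits in FF3, ready to appear first on the serial output, exactly as in the description.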
When SHIFT/LOAD is LOW, gates G1, G2, G3 and G4 are enabled, allowing
each data bit to be applied to the D input of its respective Flip-Flop. When a clock
pulse is applied, the Flip-Flops with D = 1 will set and those with D = 0 will reset,
thereby storing all four bits simultaneously.
When SHIFT/LOAD is HIGH, gates G1, G2, G3 and G4 are disabled and gates
G5, G6 and G7 are enabled, allowing the data bits to shift right from one stage to the
next. The OR gates allow either the normal shifting operation or the parallel data-
entry operation, depending on which AND gates are enabled by the level on the
SHIFT/LOAD input.
The input 0 of each MUX is selected when S1S0= 00 and input 1 is selected when S1S0= 01. Similarly, inputs 2 and 3 are selected when S1S0= 10 and S1S0= 11 respectively. The inputs S1 and S0 control the mode of operation of the register.
When S1S0= 00, the present value of the register is applied to the D inputs of the Flip-Flops. This is done by connecting the output of each Flip-Flop to the 0 input of the respective multiplexer. The next clock pulse transfers into each Flip-Flop the binary value it held previously, and hence no change of state occurs.
When S1S0= 01, terminal 1 of the multiplexer inputs has a path to the D inputs of the Flip-Flops. This causes a shift-right operation, with the serial input transferred into the left-most Flip-Flop.
Mode Control Operation
S1 S0
0 0 No change
0 1 Shift-right
1 0 Shift-left
1 1 Parallel load
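The four modes in the table above can be sketched as one step function of a 4-bit universal shift register (bit ordering [Q0..Q3] and serial-input placement are illustrative assumptions):

```python
def universal_step(q, s1, s0, serial_in=0, parallel=None):
    """One clock pulse of a 4-bit universal shift register, q = [Q0..Q3]."""
    if (s1, s0) == (0, 0):                 # no change
        return q[:]
    if (s1, s0) == (0, 1):                 # shift right: serial input enters Q0
        return [serial_in] + q[:-1]
    if (s1, s0) == (1, 0):                 # shift left: serial input enters Q3
        return q[1:] + [serial_in]
    return list(parallel)                  # parallel load

print(universal_step([1, 0, 0, 0], 0, 1))                         # [0, 1, 0, 0]
print(universal_step([1, 0, 0, 0], 1, 1, parallel=[1, 1, 0, 1]))  # [1, 1, 0, 1]
```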
A bidirectional shift register is one in which the data can be shifted either left
or right. It can be implemented by using gating logic that enables the transfer of a
data bit from one stage to the next stage to the right or to the left depending on the
level of a control line.
A 4-bit bidirectional shift register is shown below. A HIGH on the
RIGHT/LEFT control input allows data bits inside the register to be shifted to the
right, and a LOW enables data bits inside the register to be shifted to the left.
When the RIGHT/LEFT control input is HIGH, gates G1, G2, G3 and G4 are
enabled, and the state of the Q output of each Flip-Flop is passed through to the D
input of the following Flip-Flop. When a clock pulse occurs, the data bits are shifted
one place to the right.
When the RIGHT/LEFT control input is LOW, gates G5, G6, G7 and G8 are
enabled, and the Q output of each Flip-Flop is passed through to the D input of the
preceding Flip-Flop. When a clock pulse occurs, the data bits are then shifted one
place to the left.
SYNCHRONOUS COUNTERS:
Counters are classified into two broad categories according to the way they are clocked:
(i) Asynchronous counters
(ii) Synchronous counters
In asynchronous (ripple) counters, the first Flip-Flop is clocked by the
external clock pulse and then each successive Flip-Flop is clocked by the output of
the preceding Flip-Flop.
In synchronous counters, the clock input is connected to all of the Flip-Flops
so that they are clocked simultaneously. Within each of these two categories,
counters are classified primarily by the type of sequence, the number of states, or
the
number of Flip-Flops in the counter.
The term ‘synchronous’ refers to events that have a fixed time
relationship with each other. In synchronous counter, the clock pulses are
applied to all Flip- Flops simultaneously. Hence there is minimum propagation
delay.
S.No | Asynchronous (ripple) counter | Synchronous counter
1 | All the Flip-Flops are not clocked simultaneously. | All the Flip-Flops are clocked simultaneously.
2 | The delay times of all Flip-Flops are added; therefore there is considerable propagation delay. | There is minimum propagation delay.
3 | Speed of operation is low. | Speed of operation is high.
In this counter the clock signal is connected in parallel to clock inputs of both
the Flip-Flops (FF0 and FF1). The output of FF0 is connected to J1 and K1 inputs of the
second Flip-Flop (FF1).
Assume that the counter is initially in the binary 0 state; i.e., both Flip-Flops are RESET. When the positive edge of the first clock pulse is applied, FF0 will toggle because J0= K0= 1, whereas the FF1 output will remain 0 because J1= K1= 0. After the first clock pulse, Q0=1 and Q1=0.
When the leading edge of CLK2 occurs, FF0 will toggle and Q0 will go LOW.
Since FF1 has a HIGH (Q0 = 1) on its J1 and K1 inputs at the triggering edge of this
clock pulse, the Flip-Flop toggles and Q1 goes HIGH. Thus, after CLK2,
Q0 = 0 and Q1 = 1.
When the leading edge of CLK3 occurs, FF0 again toggles to the SET state (Q0
= 1), and FF1 remains SET (Q1 = 1) because its J1 and K1 inputs are both LOW (Q0 = 0).
Finally, at the leading edge of CLK4, Q 0 and Q1 go LOW because they both
have a toggle condition on their J1 and K1 inputs. The counter has now recycled to its
original state, Q0 = Q1 = 0.
Timing diagram
CLOCK Pulse Q2 Q1 Q0
Initially 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
8 (recycles) 0 0 0
Timing diagram
J3 = K3 = Q0Q1Q2 + Q0Q3
This function is implemented with the AND/OR logic connected to the J3 and K3
inputs of FF3.
CLOCK Pulse Q3 Q2 Q1 Q0
Initially 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10(recycles) 0 0 0 0
The timing diagram for the decade counter is shown below.
MODULUS-N-COUNTERS:
A counter with 'n' Flip-Flops has a maximum MOD number of 2^n. Find the number of Flip-Flops (n) required for the desired MOD number (N) using the relation
2^n ≥ N
(i) For example, a 3-bit binary counter is a MOD-8 counter. The basic counter can be modified to produce MOD numbers less than 2^n by allowing the counter to skip states that are normally part of the counting sequence.
n= 3
N= 8
2^n = 2^3 = 8 = N
1. Find the number of Flip-Flops (n) required for the desired MOD number (N)
When the counter reaches Nth state, the output of the NAND gate goes LOW,
resetting all Flip-Flops to 0. Therefore the counter counts from 0 through N-1.
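The reset-on-N behaviour can be sketched in a few lines; the NAND decode of state N is modelled here by a simple comparison (a MOD-6 counter is used as an example):

```python
def mod_n_counts(N, pulses):
    """Ripple counter with a NAND gate decoding state N to clear all Flip-Flops."""
    count, seq = 0, []
    for _ in range(pulses):
        count += 1
        if count == N:      # NAND output goes LOW at state N, resetting the count
            count = 0
        seq.append(count)
    return seq

print(mod_n_counts(6, 8))   # counts 0 through N-1 = 5, then recycles
```

Note this models only the counting sequence, not the brief glitch in which state N exists for a few nanoseconds before the asynchronous clear takes effect.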
The Q output of each stage is connected to the D input of the next stage (assuming that D Flip-Flops are used), and the output of the last stage is connected back to the D input of the first stage.
Ring counter
The output Q0 sets D1 input, Q1 sets D2, Q2 sets D3 and Q3 is fed back to D0.
Because of these conditions, bits are shifted left one position per positive clock edge
and fed back to the input. All the Flip-Flops are clocked together. When CLR goes LOW and then back HIGH, the register is initialized so that the output is Q = 0001.
The first positive clock edge shifts the MSB into the LSB position and the other bits one position left, so that the output becomes Q= 0010. This process continues on the second and third clock edges, so that the successive outputs are 0100 and 1000. The fourth positive clock edge starts the cycle all over again and the output is 0001. Thus the stored 1 bit follows a circular path (i.e., the stored 1 moves left through all the Flip-Flops, and the final Flip-Flop sends it back to the first Flip-Flop). This action gives the circuit the name ring counter.
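The circular path of the single 1 bit can be simulated directly; the wrap-around of the MSB back to the LSB is the feedback connection described above:

```python
def ring_counter(clocks, width=4):
    q = 0b0001                     # CLR presets the single 1 into the first stage
    seq = []
    for _ in range(clocks):
        msb = (q >> (width - 1)) & 1
        q = ((q << 1) & ((1 << width) - 1)) | msb   # shift left, MSB wraps to LSB
        seq.append(format(q, '04b'))
    return seq

print(ring_counter(4))   # ['0010', '0100', '1000', '0001']
```

A 4-bit ring counter therefore has only 4 states, one per Flip-Flop, rather than the 16 of a binary counter.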
DESIGN OF COUNTERS:
The procedure for design of counters as follows:
1. Specify the counter sequence and draw a state diagram.
2. Derive a next-state table from the state diagram.
3. Make the state assignment and develop a transition table showing the
flip- flop inputs required.
4. Draw the K-maps for each input of each Flip-Flop.
5. Derive the logic expression for each Flip-Flop input from the K-maps.
6. Implement the expressions with combinational logic and combine with
the Flip-Flops to form the counter.
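As a sanity check on the overall procedure, consider a 3-bit binary up counter built from JK Flip-Flops. After the K-map step the inputs reduce to J0 = K0 = 1, J1 = K1 = Q0, J2 = K2 = Q0Q1 (a standard result, assumed here rather than re-derived), and the finished design can be simulated:

```python
def jk_next(q, j, k):
    # JK characteristic: Q+ = J·Q' + K'·Q
    return int((j and not q) or (not k and q))

def counter_step(q2, q1, q0):
    # Assumed K-map results for a 3-bit binary up counter:
    # J0 = K0 = 1, J1 = K1 = Q0, J2 = K2 = Q0·Q1
    t2 = q0 and q1
    return jk_next(q2, t2, t2), jk_next(q1, q0, q0), jk_next(q0, 1, 1)

state, seq = (0, 0, 0), []
for _ in range(8):
    state = counter_step(*state)
    seq.append(state)
print(seq)   # 001, 010, 011, 100, 101, 110, 111, 000
```

The simulated sequence walks through all eight states and recycles, confirming the design.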
Examples:
1. Using JK Flip-Flops, design a synchronous counter which counts in the
sequence, 000, 001, 010, 011, 100, 101, 110, 111, 000.
Step 1: State Diagram
State Diagram:
Excitation Table:
Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation Table for Counter:
K-map
Logic Diagram:
Logic Diagram:
Excitation Table:
K-map Simplification:
Excitation Table:
Excitation Table for JK Flip-Flop:
K-map Simplification:
Logic Diagram:
6. Design a synchronous 3-bit gray code up counter with the help of excitation table.
Soln:
Gray code sequence: 000, 001, 011, 010, 110, 111, 101, 100.
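The Gray code sequence above can be generated from the ordinary binary count with the standard mapping g = i ⊕ (i >> 1), which is a quick way to check the counter's intended sequence:

```python
def gray(i):
    return i ^ (i >> 1)     # standard binary-to-Gray mapping

seq = [format(gray(i), '03b') for i in range(8)]
print(seq)   # ['000', '001', '011', '010', '110', '111', '101', '100']
```

Each successive code word differs from the previous one in exactly one bit, which is the defining property the counter must reproduce.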
State Diagram:
K-map Simplification:
Excitation Table:
Input Present State Next State Flip-Flop Inputs
Up/Down QA QB QC QA+1 QB+1 QC+1 JA KA JB KB JC KC
0 0 0 0 1 1 1 1 x 1 x 1 x
0 1 1 1 1 1 0 x 0 x 0 x 1
0 1 1 0 1 0 1 x 0 x 1 1 x
0 1 0 1 1 0 0 x 0 0 x x 1
0 1 0 0 0 1 1 x 1 1 x 1 x
0 0 1 1 0 1 0 0 x x 0 x 1
0 0 1 0 0 0 1 0 x x 1 1 x
0 0 0 1 0 0 0 0 x 0 x x 1
1 0 0 0 0 0 1 0 x 0 x 1 x
1 0 0 1 0 1 0 0 x 1 x x 1
1 0 1 0 0 1 1 0 x x 0 1 x
1 0 1 1 1 0 0 1 x x 1 x 1
1 1 0 0 1 0 1 x 0 0 x 1 x
1 1 0 1 1 1 0 x 0 1 x x 1
1 1 1 0 1 1 1 x 0 x 0 1 x
1 1 1 1 0 0 0 x 1 x 1 x 1
K-map Simplification:
Logic Diagram:
In a counter, if the next state of some unused state is again an unused state, and if by chance the counter happens to find itself in the unused states and never arrives at a used state, then the counter is said to be in the lockout condition.
Desired Sequence
To ensure that lockout does not occur, i.e., that the counter will come to the initial state from any unused state, an additional logic circuit is necessary.
Here, states 5, 2 and 0 are forced to go to states 6, 3 and 1, respectively, to avoid the lockout condition.
Excitation table:
Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
K-map
Logic Diagram:
UNIT III
COMPUTER FUNDAMENTALS
Functional units of a computer system are the parts that perform the operations and calculations called for by the computer program. A computer consists of five main units, namely: the input unit, the memory unit, the arithmetic & logic unit, the control unit and the output unit. The arithmetic & logic unit and the control unit together form the Central Processing Unit (CPU).
Central processing unit
The central processing unit, commonly known as the CPU, can be referred to as the electronic circuitry within a computer that carries out the instructions given by a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions.
Memory unit
o The Memory unit can be referred to as the storage area in which programs are kept which are
running, and that contains data needed by the running programs.
o The Memory unit can be categorized in two ways namely, primary memory and secondary
memory.
o It enables a processor to access running execution applications and services that are
temporarily stored in a specific memory location.
o Primary storage is the fastest memory and operates at electronic speeds. Primary memory contains a large number of semiconductor storage cells, each capable of storing a bit of information. The word length of a computer typically ranges from 16 to 64 bits.
o It is also known as the volatile form of memory, meaning that when the computer is shut down, anything contained in RAM is lost.
o Cache memory is a kind of memory used to fetch data very quickly. It is tightly coupled with the processor.
o The most common examples of primary memory are RAM and ROM.
o Secondary memory is used when a large amount of data and programs have to be stored for a
long-term basis.
It is also known as the Non-volatile memory form of memory, means the data is stored permanently irrespective o
The most common examples of secondary memory are magnetic disks, magnetic tapes, and optical disks.
Control unit
o The control unit is also known as the nerve centre of a computer system.
o Let us consider an example of the addition of two operands by the instruction Add LOCA, R0. This instruction adds the operand at memory location LOCA to the operand in the register R0 and places the sum in the register R0. This instruction internally performs several steps.
Output Unit
o The primary function of the output unit is to send the processed results to the user. Output
devices display information in a way that the user can understand.
o Output devices are pieces of equipment that are used to generate information or any other
response processed by the computer. These devices display information that has been held or
generated within a computer.
Von-Neumann Model
Von Neumann proposed his computer architecture design in 1945, which was later known as the Von Neumann Architecture. It consisted of a Control Unit, an Arithmetic and Logic Unit (ALU), a Memory Unit, Registers and Inputs/Outputs.
Von Neumann architecture is based on the stored-program computer concept, where instruction data
and program data are stored in the same memory. This design is still used in most computers
produced today.
The part of the Computer that performs the bulk of data processing operations is called the Central
Processing Unit and is referred to as the CPU.
The Central Processing Unit can also be defined as an electric circuit responsible for executing the
instructions of a computer program.
The CPU performs a variety of functions dictated by the type of instructions that are incorporated in
the computer.
The major components of CPU are Arithmetic and Logic Unit (ALU), Control Unit (CU) and a
variety of registers.
The Arithmetic and Logic Unit (ALU) performs the required micro-operations for executing the
instructions. In simple words, ALU allows arithmetic (add, subtract, etc.) and logic (AND, OR,
NOT, etc.) operations to be carried out.
Control Unit
The Control Unit of a computer system controls the operations of components like ALU, memory
and input/output devices.
The Control Unit consists of a program counter that contains the address of the instructions to be
fetched and an instruction register into which instructions are fetched from memory for execution.
Registers
Registers refer to high-speed storage areas in the CPU. The data processed by the CPU are fetched
from the registers.
Following is the list of registers that plays a crucial role in data processing.
Registers Description
MAR (Memory Address Register): This register holds the memory location of the data that needs to be accessed.
MDR (Memory Data Register): This register holds the data that is being transferred to or from memory.
AC (Accumulator): This register holds intermediate arithmetic and logic results.
Buses
Buses are the means by which information is shared between the registers in a multiple-register
configuration system.
A bus structure consists of a set of common lines, one for each bit of a register, through which binary
information is transferred one at a time. Control signals determine which register is selected by the
bus during each particular register transfer.
Von Neumann Architecture comprises three major bus systems for data transfer.
Bus Description
Address Bus: carries the address of data (but not the data itself) between the processor and the memory.
Data Bus: carries data between the processor, the memory unit and the input/output devices.
Control Bus: carries control signals and commands from the CPU, and status signals from the devices.
Memory Unit
A memory unit is a collection of storage cells together with associated circuits needed to transfer
information in and out of the storage. The memory stores binary information in groups of bits called
words. The internal structure of a memory unit is specified by the number of words it contains and
the number of bits in each word.
Operands are definite elements of computer instruction that show what information is to be operated
on. The most important general categories of data are
1. Addresses
2. Numbers
3. Characters
4. Logical data
In many cases, some calculation must be performed on the operand reference to determine the main
or virtual memory address.
In this context, addresses can be considered to be unsigned integers. Other common data types are numbers,
characters, and logical data, and each of these is briefly described below. Some machines define specialized
data types or data structures. For example, machine operations may operate directly on a list or a string of
characters.
Addresses
Addresses are nothing but a form of data. Here some calculations must be performed on the operand
reference in an instruction, which is to determine the physical address of an instruction.
Numbers
All machine languages include numeric data types. Even in non-numeric data processing, numbers
are needed to act as counters, field widths, etc. An important difference between numbers used in
ordinary mathematics and numbers stored in a computer is that the latter is limited. Thus, the
programmer is faced with understanding the consequences of rounding, overflow and underflow.
Here are the three types of numerical data in computers, such as:
1. Integer or fixed point: Fixed-point representation is used to store integers, the positive and negative whole numbers (… -3, -2, -1, 0, 1, 2, 3, …). The programmer assigns a radix-point location to each number and tracks the radix point through every operation. High-level languages such as C and BASIC usually allocate 16 bits to store each integer. Each fixed-point binary number has three important parameters that describe it:
2. Floating point: A floating-point number usually has a decimal point, which means 0, 3.14, 6.5 and -125.5 are floating-point numbers.
The term floating point is derived from the fact that there is no fixed number of digits before and
after the decimal point, which means the decimal point can float. There are also representations
in which the number of digits before and after the decimal point is set, called fixed-point
representations. In general, floating-point representations are slower and less accurate than fixed-
point representations, but they can handle a larger range of numbers.
3. Decimal number: Decimals are an extension of our number system. Decimals can also be considered fractions with denominators of 10, 100, 1000, etc. Numbers expressed in decimal form are called decimal numbers, or decimals, for example 1, 4.09 and 13.83. A decimal number has two parts, separated by a dot (.) called the decimal point.
Whole number part: The digits lying to the left of the decimal point form the whole number
part. The places begin with ones, tens, hundreds, thousands and so on.
Decimal part: The decimal point and the digits lying to the right of the decimal point form the decimal part. The places begin with tenths, hundredths, thousandths and so on.
Characters
A common form of data is text, or character strings. While textual data are most convenient for humans, computers work in binary; so all characters, whether letters, punctuation or digits, are stored as binary numbers. All of the characters that a computer can use are called a character set.
Here are the two common standards, such as:
ASCII uses seven bits, giving a character set of 128 characters. The characters are represented in
a table called the ASCII table. The 128 characters include:
We can say that the letter 'A' is the first letter of the alphabet; 'B' is the second, and so on, all the
way up to 'Z', which is the 26th letter. In ASCII, each character has its own assigned number.
Denary, binary and hexadecimal representations of ASCII characters are shown in the table below.
Character Denary Binary Hexadecimal
A 65 1000001 41
Z 90 1011010 5A
a 97 1100001 61
z 122 1111010 7A
0 48 0110000 30
9 57 0111001 39
Space 32 0100000 20
! 33 0100001 21
Similarly, lower-case letters start at denary 97 (binary 1100001, hexadecimal 61) and end at denary
122 (binary 1111010, hexadecimal 7A). When data is stored or transmitted, its ASCII or Unicode
number is used, not the character itself.
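The table rows above can be reproduced directly from the built-in character-code functions, which is a quick way to verify the denary, binary and hexadecimal columns:

```python
for ch in "AZaz09 !":
    code = ord(ch)                 # character -> denary code
    print(repr(ch), code, format(code, '07b'), format(code, '02X'))
```

`ord()` returns the code number and `chr()` inverts it, so `chr(ord('A') + 25)` gives `'Z'`, matching the observation that the alphabet occupies consecutive codes.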
The IRA (International Reference Alphabet) is also widely used outside the United States. A unique 7-bit pattern represents each character in this code; thus, 128 different characters can be represented. This is more than is needed for printable characters, and some of the patterns represent control characters. Some control characters control the printing of characters on a page, and others are concerned with communications procedures.
IRA-encoded characters are almost always stored and transmitted using 8 bits per character. The eighth bit may be set to 0 or used as a parity bit for error detection. In the latter case, the bit is set such that the total number of binary 1s in each octet is always odd (odd parity) or always even (even parity).
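The odd-parity rule can be sketched in a few lines: count the 1s in the 7-bit code, and set the eighth bit so the total becomes odd.

```python
def add_odd_parity(code7):
    """Return the 8-bit pattern: parity bit in the MSB so the total 1-count is odd."""
    ones = bin(code7).count('1')
    parity = 1 if ones % 2 == 0 else 0
    return (parity << 7) | code7

a = add_odd_parity(ord('A'))   # 'A' = 1000001 has two 1s, so the parity bit is 1
print(format(a, '08b'))        # 11000001
```

A receiver recomputes the count; any single-bit error flips the parity and is detected (though two flipped bits cancel out and go unnoticed).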
Logical data
Normally, each word or other addressable unit (byte, half-word, and so on) is treated as a single unit
of data. Sometimes, it is useful to consider an n-bit unit consisting of 1-bit items of data, each item
having the value 0 or 1. When data are viewed this way, they are considered to be logical data.
The Boolean data type can only represent two values: true or false. Although only two values are possible, they are rarely implemented as a single binary digit, for efficiency reasons. Many programming languages do not have an explicit Boolean type, instead interpreting 0 as false and other values as true. Boolean data refers to the logical structure of how the language is interpreted by the machine. In this case, a Boolean 0 refers to logic false, and true is always non-zero, especially the value one, known as Boolean 1.
We may want to store an array of Boolean or binary data items, in which each item can take on only
the values 0 and 1. With logical data, memory can be used most efficiently for this storage.
There are occasions when we want to manipulate the bits of a data item.
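For example, the individual 1-bit items of a packed unit of logical data can be manipulated with the usual masking operations; the sketch below (Python, with an assumed 8-bit unit) sets, clears and tests single bits within a byte:

```python
flags = 0b00000000          # eight 1-bit logical items packed in one byte

flags |= 1 << 3             # set bit 3 (mark item 3 as 1)
flags |= 1 << 6             # set bit 6 (mark item 6 as 1)
flags &= ~(1 << 3)          # clear bit 3 again (mark item 3 as 0)

print((flags >> 6) & 1)     # test bit 6
print((flags >> 3) & 1)     # test bit 3
```

Storing the items this way uses one bit per item instead of a full byte or word each, which is the efficiency argument made above.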
The two main categories of instruction set architectures are CISC (such as Intel's x86 series) and RISC (such
as ARM and MIPS).
The i8086 has many instructions that use implicit operands although it has a general register set; the i8051 is another example of a machine with implicit operands.
Stack
A stack architecture evaluates C = A + B as PUSH A, PUSH B, ADD, POP C.
Advantages: Simple model of expression evaluation (reverse Polish). Short instructions. Disadvantages: A stack cannot be accessed randomly, which makes it difficult to generate efficient code.
RISC stands for Reduced Instruction Set Computer. The ISA is composed of instructions that all
have exactly the same size, usually 32 bits. Thus they can be pre-fetched and pipelined successfully.
All ALU instructions have 3 operands which are only registers. The only memory access is through
explicit LOAD/STORE instructions.
Thus C = A + B will be assembled as:
LOAD R1,A
LOAD R2,B
ADD R3,R1,R2
STORE C,R3
The memory consists of many millions of storage cells, each of which can store a bit of information having the
value 0 or 1. Because a single bit represents a very small amount of information, bits are seldom handled
individually.
The usual approach is to deal with them in groups of fixed size. For this purpose, the memory is organized so that a group of n bits can be stored or retrieved in a single basic operation; each group of n bits is referred to as a word, and n is called the word length.
Modern computers have word lengths that typically range from 16 to 64 bits. If the word length of a computer
is 32 bits, a single word can store a 32-bit signed number or four ASCII-encoded characters, each occupying 8
bits, as shown in Figure
A unit of 8 bits is called a byte. Machine instructions may require one or more words for their representation;
this will become clearer after we have described instructions at the assembly-language level. Accessing the memory to store or
retrieve a single item of information, either a word or a byte, requires distinct names or addresses for each
location. It is customary to use numbers from 0 to 2^k − 1, for some suitable value of k, as the addresses of
successive locations in the memory. Thus, the memory can have up to 2^k addressable locations. The 2^k
addresses constitute the address space of the computer. For example, a 24-bit address generates an address
space of 2^24 (16,777,216) locations. This number is usually written as 16M (16 mega), where 1M is the
number 2^20 (1,048,576). A 32-bit address creates an address space of 2^32 or 4G (4 giga) locations, where 1G
is 2^30. Other notational conventions that are commonly used are K (kilo) for the number 2^10 (1,024), and T
(tera) for the number 2^40.
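These address-space figures can be checked directly; the short sketch below (Python, for illustration only) evaluates the powers of two quoted above:

```python
# Number of addressable locations for a k-bit address is 2**k
for k in (24, 32):
    print(k, "bits ->", 2**k, "locations")

# The common size prefixes are themselves powers of two
K, M, G, T = 2**10, 2**20, 2**30, 2**40
print(2**24 // M, "M")   # a 24-bit address space is 16M locations
print(2**32 // G, "G")   # a 32-bit address space is 4G locations
```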
Byte Addressability :
A byte is always 8 bits, but the word length typically ranges from 16 to 64 bits. It is impractical to assign
distinct addresses to individual bit locations in the memory. The most practical assignment is to have
successive addresses refer to successive byte locations in the memory. This is the assignment used in
most modern computers. The term byte-addressable memory is used for this assignment. Byte locations
have addresses 0, 1, 2, .... Thus, if the word length of the machine is 32 bits, successive words are
located at addresses 0, 4, 8, ..., with each word consisting of four bytes.
There are two ways that byte addresses can be assigned across words: big-endian and little-endian.
The name big-endian is used when lower byte addresses are used for the more significant bytes (the leftmost
bytes) of the word.
The name little-endian is used for the opposite ordering, where the lower byte addresses are used for the less
significant bytes (the rightmost bytes) of the word. The words “more significant” and “less significant” are
used in relation to the weights (powers of 2) assigned to bits when the word represents a number. Both little-
endian and big-endian assignments are used in commercial machines. In both cases, byte addresses 0, 4, 8,...,
are taken as the addresses of successive words in the memory of a computer with a 32-bit word length. These
are the addresses used when accessing the memory to store or retrieve a word.
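The two byte orderings can be demonstrated concretely; the sketch below (Python, using the standard int.to_bytes method; the word value 0x12345678 is our own example) shows how the same 32-bit word is laid out under each convention:

```python
value = 0x12345678  # a 32-bit word occupying four successive byte locations

big = value.to_bytes(4, byteorder="big")       # most significant byte first
little = value.to_bytes(4, byteorder="little") # least significant byte first

print(big.hex())     # 12345678 - byte address 0 holds 0x12 (big-endian)
print(little.hex())  # 78563412 - byte address 0 holds 0x78 (little-endian)
```

Either way, the word as a whole is still accessed at address 0; only the assignment of its bytes to addresses 0 through 3 differs.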
Memory Operations
Both program instructions and data operands are stored in the memory. To execute an instruction, the
processor control circuits must cause the word (or words) containing the instruction to be transferred from the
memory to the processor. Operands and results must also be moved between the memory and the processor.
Thus, two basic operations involving the memory are needed, namely Read and Write.
Read Operation:
The Read operation transfers a copy of the contents of a specific memory location to the processor. The
memory contents remain unchanged. To start a Read operation, the processor sends the address of the desired
location to the memory and requests that its contents be read. The memory reads the data stored at that
address and sends them to the processor.
Write Operation:
The Write operation transfers an item of information from the processor to a specific memory location,
overwriting the former contents of that location. To initiate a Write operation, the processor sends the address
of the desired location to the memory, together with the data to be written into that location. The memory then
uses the address and data to perform the write.
The tasks carried out by a computer program consist of a sequence of small steps, such as adding two
numbers, testing for a particular condition, reading a character from the keyboard, or sending a character to be
displayed on a display screen. A computer must have instructions capable of performing four types of operations:
• Data transfers between the memory and the processor registers
• Arithmetic and logic operations on data
• Program sequencing and control
• I/O transfers
We begin by discussing instructions for the first two types of operations. To facilitate the discussion, we first
need some notation.
Register Transfer Notation
We need to describe the transfer of information from one location in a computer to another. Possible locations that may be involved in such transfers are memory locations, processor registers, or registers in the I/O subsystem. In register transfer notation, the contents of a location are denoted by placing square brackets around its name, and a transfer is written with a left-arrow.
Example 1:
R2 ← [LOC]
This means that the contents of memory location LOC are transferred into processor register R2.
Assembly-Language Notation:
Example 1:
Load R2, LOC
is a generic instruction that causes the transfer described above, from memory location LOC to processor
register R2. The contents of LOC are unchanged by the execution of this instruction, but the old contents of
register R2 are overwritten. The name Load is appropriate for this instruction, because the contents read from
a memory location are loaded into a processor register.
Example 2:
Add R4, R2, R3
Adding two numbers contained in processor registers R2 and R3 and placing their sum in R4 can be specified
by this assembly-language statement; registers R2 and R3 hold the source operands, while R4 is the
destination.
One of the most important characteristics that distinguish different computers is the nature of their
instructions. There are two fundamentally different approaches in the design of instruction sets for modern
computers. One popular approach is based on the premise that higher performance can be achieved if each
instruction occupies exactly one word in memory, and all operands needed to execute a given arithmetic or
logic operation specified by an instruction are already in processor registers. This approach is conducive to an
implementation of the processing unit in which the various operations needed to process a sequence of
instructions are performed in “pipelined” fashion to overlap activity and reduce total execution time of a
program. The restriction that each instruction must fit into a single word reduces the complexity and the
number of different types of instructions that may be included in the instruction set of a computer. Such
computers are called Reduced Instruction Set Computers (RISC).
An alternative to the RISC approach is to make use of more complex instructions which may span more than
one word of memory, and which may specify more complicated operations. This approach was prevalent prior
to the introduction of the RISC approach in the 1970s. Although the use of complex instructions was not
originally identified by any particular label, computers based on this idea have been subsequently called
Complex Instruction Set Computers (CISC).
Two key characteristics of RISC instruction sets are: • Each instruction fits in a single word. • A load/store
architecture is used, in which – Memory operands are accessed only using Load and Store instructions. – All
operands involved in an arithmetic or logic operation must either be in processor registers, or one of the
operands may be given explicitly within the instruction word.
At the start of execution of a program, all instructions and data used in the program are stored in the memory
of a computer. Processor registers do not contain valid operands at that time. If operands are expected to be in
processor registers before they can be used by an instruction, then it is necessary to first bring these operands
into the registers. This task is done by Load instructions, which copy the contents of a memory location into a
processor register. Load instructions are of the form
Load destination, source
or more specifically
Load processor_register, memory_location
Example:
The statement C = A + B
The required action can be accomplished by a sequence of simple machine instructions. We choose to use
registers R2, R3, and R4 to perform the task with four instructions:
Load R2, A
Load R3, B
Add R4, R2, R3
Store R4, C
We assume that the word length is 32 bits and the memory is byte-addressable. The four instructions of the
program are in successive word locations, starting at location i. Since each instruction is 4 bytes long, the
second, third, and fourth instructions are at addresses i + 4, i + 8, and i + 12. For simplicity, we assume that a
desired memory address can be directly specified in Load and Store instructions, although this is not possible
if a full 32-bit address is involved.
Let us consider how this program is executed. The processor contains a register called the program counter
(PC), which holds the address of the next instruction to be executed. To begin executing a program, the
address of its first instruction (i in our example) must be placed into the PC. Then, the processor control
circuits use the information in the PC to fetch and execute instructions, one at a time, in the order of
increasing addresses. This is called straight-line sequencing. During the execution of each instruction, the PC
is incremented by 4 to point to the next instruction. Thus, after the Store instruction at location i + 12 is
executed, the PC contains the value i + 16, which is the address of the first instruction of the next program
segment. Executing a given instruction is a two-phase procedure. In the first phase, called instruction fetch,
the instruction is fetched from the memory location whose address is in the PC. This instruction is placed in
the instruction register (IR) in the processor. At the start of the second phase, called instruction execute, the
instruction in IR is examined to determine which operation is to be performed. The specified operation is then
performed by the processor. This involves a small number of steps such as fetching operands from the
memory or from processor registers, performing an arithmetic or logic operation, and storing the result in the
destination location. At some point during this two-phase procedure, the contents of the PC are advanced to
point to the next instruction. When the execute phase of an instruction is completed, the PC contains the
address of the next instruction, and a new instruction fetch phase can begin.
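The two-phase fetch/execute procedure can be sketched as a simple loop; the Python sketch below is only an illustration (the instruction strings and the starting address 100 are assumed, and the execute phase is stubbed out):

```python
# Hypothetical instruction memory: address -> instruction, 4 bytes each.
memory = {
    100: "Load  R2, A",
    104: "Load  R3, B",
    108: "Add   R4, R2, R3",
    112: "Store R4, C",
}

pc = 100                  # program counter: address of the next instruction
trace = []
while pc in memory:
    ir = memory[pc]       # phase 1: instruction fetch into the IR
    pc += 4               # PC advanced to point to the next instruction
    trace.append(ir)      # phase 2: examine the IR and execute (stubbed here)

print(pc)                 # 116 - address of the next program segment
```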
Branching:
The addresses of the memory locations containing the n numbers are symbolically given as NUM1, NUM2,...,
NUMn, and separate Load and Add instructions are used to add each number to the contents of register R2.
After all the numbers have been added, the result is placed in memory location SUM.
It is possible to implement a program loop in which the instructions read the next number in the list and add it
to the current sum. To add all numbers, the loop has to be executed as many times as there are numbers in the
list. The body of the loop is a straight-line sequence of instructions executed repeatedly. It starts at location
LOOP and ends at the instruction Branch_if_[R2]>0. During each pass through this loop, the address of the
next list entry is determined, and that entry is loaded into R5 and added to R3. The address of an operand can
be specified in various ways, as will be described in Section 2.4. For now, we concentrate on how to create
and control a program loop. Assume that the number of entries in the list, n, is stored in memory location N,
as shown. Register R2 is used as a counter to determine the number of times the loop is executed. Hence, the
contents of location N are loaded into register R2 at the beginning of the program. Then, within the body of
the loop, the instruction.
Branch_if_[R2]>0 LOOP
is a conditional branch instruction that causes a branch to location LOOP if the contents of register R2 are
greater than zero. This means that the loop is repeated as long as there are entries in the list that are yet to be
added to R3. At the end of the nth pass through the loop, the Subtract instruction produces a value of zero in
R2, and, hence, branching does not occur. Instead, the Store instruction is fetched and executed. It moves the
final result from R3 into memory location SUM.
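The behaviour of this loop can be mirrored step by step in ordinary code; in the Python sketch below the list contents are our own example data, and each statement is annotated with the machine instruction it imitates:

```python
# Register-level trace of the loop: R2 is the counter loaded from N,
# R3 accumulates the sum, R5 holds the entry just loaded.
NUM = [5, 8, 2, 7]           # the list stored at NUM1 .. NUMn
R2 = len(NUM)                # Load R2, N   (counter)
R3 = 0                       # Clear R3     (running sum)
i = 0                        # points at the next list entry

while True:
    R5 = NUM[i]              # Load R5, next entry
    R3 = R3 + R5             # Add R3, R3, R5
    i += 1
    R2 = R2 - 1              # Subtract R2, R2, #1
    if not R2 > 0:           # Branch_if_[R2]>0 LOOP
        break                # counter reached zero: fall through

SUM = R3                     # Store R3, SUM
print(SUM)                   # 22
```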
The above instruction is representative of a class of three-operand instructions that use operands in processor
registers. Registers Rdst, Rsrc1, and Rsrc2 hold the destination and two source operands. If a processor has
32 registers, then it is necessary to use five bits to specify each of the three registers in such instructions. If
each instruction is implemented in a 32-bit word, the remaining 17 bits can be used to specify the OP code
that indicates the operation to be performed.
Of the 32 bits available, ten bits are needed to specify the two registers. The remaining 22 bits must give the
OP code and the value of the immediate operand. The most useful sizes of immediate operands are 32, 16,
and 8 bits. Since 32 bits are not available, a good choice is to allocate 16 bits for the immediate operand. This
leaves six bits for specifying the OP code.
This format can also be used for Load and Store instructions, where the Index addressing mode uses the 16-bit
field to specify the offset that is added to the contents of the index register.
The format in Figure b can also be used to encode the Branch instructions. Consider the Branch-greater-than
instruction at memory address 128, which branches to LOOP if the contents of register R2 are greater than
those of register R0 (which holds zero). The registers R2 and R0 can be specified in the two register fields in
Figure b. The six-bit OP code has to identify the BGT operation. The 16-bit immediate field can be used to
provide the information needed to determine the branch target address, which is the location of the instruction
with the label LOOP. The target address generally comprises 32 bits. Since there is no space for 32 bits, the
BGT instruction makes use of the immediate field to give an offset from the location of
this instruction in the
program to the required branch target. At the time the BGT instruction is being executed, the program
counter, PC, has been incremented to point to the next instruction, which is the Store instruction at address
132. Therefore, the branch offset is 132 − 112 = 20. Since the processor computes the target address by
adding the current contents of the PC and the branch offset, the required offset in this example is negative,
namely −20. Finally, we should consider the Call instruction, which is used to call a subroutine. It only needs
to specify the OP code and an immediate value that is used to determine the address of the first instruction in
the subroutine. If six bits are used for the OP code, then the remaining 26 bits can be used to denote the
immediate value. This gives the format shown in c.
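The offset arithmetic described above can be checked directly; the sketch below (Python, with the addresses 128 and 112 taken from the example in the text) computes the signed offset stored in the immediate field:

```python
branch_address = 128             # address of the BGT instruction
loop_target = 112                # address of the instruction labelled LOOP

pc = branch_address + 4          # PC already points past BGT, at 132
offset = loop_target - pc        # value placed in the 16-bit immediate field

print(offset)                    # -20
print(pc + offset)               # 112 - the target recovered at run time
```

At run time the processor adds the current PC contents and the branch offset, recovering the target address 112.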
Addressing Modes:
The operation field of an instruction specifies the operation to be performed, and this
operation must be performed on some data. So each instruction needs to specify the data on
which the operation is to be performed. But the operand (data) may be in the accumulator, a
general-purpose register, or at some specified memory location. So, an appropriate location
(address) of the data needs to be specified, and in a computer there are various ways of
specifying the address of data. These various ways of specifying the address of data are
known as "Addressing Modes".
So Addressing Modes can be defined as "the technique for specifying the address of the
operands". In a computer, the address where an operand is actually found is known as the
"Effective Address". In addition to this, the two most prominent reasons why addressing
modes are so important are:
First, the way the operand data are chosen during program execution is dependent on the addressing mode of the instruction.
Second, the address field (or fields) in a typical instruction format are relatively small, and sometimes we would like to be able to reference a large range of memory locations; addressing modes make this possible.
1. Implied Addressing Mode:
Implied Addressing Mode, also known as "Implicit" or "Inherent" addressing mode, is the
addressing mode in which no operand (register, memory location, or data) is specified in the
instruction; the operands are specified implicitly in the definition of the instruction.
2. Immediate Addressing Mode:
In Immediate Addressing Mode, the operand is specified in the instruction itself; the address
field contains the operand value rather than its address.
Example:
MOV CL, 03H
Here the value 03H is itself the operand, copied directly into register CL.
It consists of a 3-bit opcode, a 12-bit address and a mode bit designated as (I). The mode bit (I) is
zero for Direct Address and 1 for Indirect Address. Now we will discuss each in detail,
one by one.
Means, here, Control fetches the instruction from memory and then uses its address part to
access memory again to read Effective Address.
[M=Memory]
i.e., (A)=1350=EA
3. Register Addressing Mode:
In Register Addressing Mode, the operands are in registers that reside within the CPU. That is, in
this mode, instruction specifies a register in CPU, which contain the operand. It is like Direct
Addressing Mode, the only difference is that the address field refers to a register instead of
memory location.
i.e., EA=R
Thus, for a Register Addressing Mode, there is no need to compute the actual address as the
operand is in a register and to get operand there is no memory access involved
4. Register Indirect Addressing Mode:
In Register Indirect Addressing Mode, the instruction specifies a register in the CPU whose
contents give the address of the operand in memory.
i.e., EA=(R)
Means, control fetches the instruction from memory and then looks in the named register (R)
for the effective address of the operand in memory.
From the above example, it is clear that the instruction (MOV AL, [BX]) specifies a register [BX], and register BX holds the effective address of the operand.
Auto-increment Addressing Mode is similar to Register Indirect Addressing Mode except that
the register is incremented after its value is loaded (or accessed) at another location like the
accumulator (AC).
EA=(R)
As an example:
Here, we see that the effective address is (R1)=400 and the operand loaded into AC is 7. After
loading, R1 is incremented by 1 and becomes 401.
That is, in the Auto-increment mode, the register R1 is incremented to 401 after
execution of the instruction.
Auto-decrement Addressing Mode is the opposite: the register is decremented before its
contents are used as the effective address.
EA=(R) - 1
As an example:
It looks as shown below:
Here, we see that, in the Auto-decrement mode, the register R1 is decremented to 399 prior to execution of the
instruction, and the operand loaded into the accumulator is 700.
Displacement Addressing Mode requires that the instruction have two address fields, at
least one of which is explicit.
That is, the value contained in one address field is A, which is used directly, and the value in the other
address field refers to a register R whose contents are added to A to produce the effective
address.
There are three areas where Displacement Addressing modes are used. In other words,
Displacement Based Addressing Modes are of three types. These are:
In Relative Addressing Mode, the contents of the program counter are added to the address part of
the instruction to obtain the Effective Address.
That is, in Relative Addressing Mode, the address field of the instruction is added to the implicitly
referenced register, the Program Counter, to obtain the effective address.
i.e., EA=A+PC
Assume that the PC contains the number 825 and the address part of the instruction contains the
number 24. The instruction at location 825 is read from memory during the fetch phase and the
Program Counter is then incremented by one to 826. The effective address is then 826 + 24 = 850.
In Indexed Addressing Mode, the register indirect addressing field of the instruction points to the
Index Register, which is a special CPU register that contains an index value, and the direct
addressing field contains the base address.
An indexed-type instruction makes sense when a data array is in memory and each operand in the
array is stored in memory relative to the base address. The distance between the beginning
address and the address of the operand is the index value stored in the index register.
Any operand in the array can be accessed with the same instruction, provided that the
index register contains the correct index value; i.e., the index register can be incremented to
facilitate access to consecutive operands.
In this mode, EA=A+Index
In Base Register Addressing Mode, the register indirect address field points to the Base Register,
and to obtain the EA, the contents of the Base Register are added to the direct address part of the
instruction.
This is similar to the Indexed Addressing Mode, except that the register is now called the Base
Register instead of the Index Register.
Thus, the difference between the Base and Index modes is in the way they are used rather than in the way
the effective address is computed. So, only the value of the Base Register requires updating to reflect
the beginning of a new memory segment.
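The effective-address rules of several of the modes above can be summarised in one sketch; the Python below is illustrative only (the memory contents, register names and the address field A = 500 are all assumed values):

```python
# Toy machine state: all names and numbers here are made-up examples.
memory = {500: 800, 800: 300}
regs = {"R1": 400, "XR": 2, "BR": 700}

A = 500                                   # address field of the instruction

ea_direct = A                             # Direct:            EA = A
ea_indirect = memory[A]                   # Indirect:          EA = (A)
ea_reg_indirect = regs["R1"]              # Register indirect: EA = (R)
ea_indexed = A + regs["XR"]               # Indexed:           EA = A + (XR)
ea_base = A + regs["BR"]                  # Base register:     EA = A + (BR)

print(ea_direct, ea_indirect, ea_reg_indirect, ea_indexed, ea_base)
# 500 800 400 502 1200
```

Note how the displacement modes (indexed and base) differ only in which register supplies the value added to A, matching the remark above that they are distinguished by use rather than computation.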
UNIT IV PROCESSOR
Instruction Execution
The execution of an instruction in a processor can be split up into a number of stages. How
many stages there are, and the purpose of each stage is different for each processor design.
Examples include 2 stages (Instruction Fetch / Instruction Execute) and 3 stages (Instruction
Fetch, Instruction Decode, Instruction Execute).
ID – The Instruction Decode stage decodes the instruction in the IR, calculates the next PC,
and reads any operands required from the register file.
nops (a nop, no-op or no-operation instruction simply does nothing) and stores.
A logic element needs to be added that chooses from among the multiple sources and steers one of those sources to its destination; this element is commonly a multiplexor.
Fig 3.1:An abstract view of the implementation of the MIPS subset showing the
major functional units and the major connections between them.
BUILDING DATAPATH
Single-cycle Datapath:
Each instruction executes in a single cycle
Multi-cycle Datapath:
Each instruction is broken up into a series of shorter steps
Pipelined Datapath:
Each instruction is broken up into a series of steps; Multiple instructions
execute at once
Differences between single cycle and multi cycle datapath
Single cycle Data Path:
o Each instruction is processed in one (long) clock cycle
o Two separate memory units for instructions and data.
To share a datapath element between two different instruction classes, we may need to
allow multiple connections to the input of an element, using a multiplexor and control
signal to select among the multiple inputs.
A reasonable way to start a datapath design is to examine the major components
required to execute each class of MIPS instructions. Let’s start at the top by looking at
which datapath elements each instruction needs, and then work our way down
through the levels of abstraction. When we show the datapath elements, we will also
show their control signals. We use abstraction in this explanation, starting from the
bottom up.
Datapath Element
A unit used to operate on or hold data within a processor. In the MIPS implementation, the
datapath elements include the instruction and data memories, the register file, the ALU, and
adders.
Program Counter (PC)
Figure 3.3a shows the first element we need: a memory unit to store the instructions
of a program and supply instructions given an address. Figure 3.3 b also shows the
program counter (PC), the register containing the address of the instruction in the
program being executed.
Lastly, we will need an adder to increment the PC to the address of the next
instruction. This adder, which is combinational, can be built from the ALU simply by
wiring the control lines so that the control always specifies an add operation.
We will draw such an ALU with the label Add, as in Figure 3.3c, to indicate that it has
been permanently made an adder and cannot perform the other ALU functions.
To execute any instruction, we must start by fetching the instruction from memory.
To prepare for executing the next instruction, we must also increment the program
counter so that it points at the next instruction, 4 bytes later.
Fig 3.3: Two state elements are needed to store and access instructions, and an adder is needed
to compute the next instruction address.
Figure 3.4 shows how to combine the three elements to form a datapath that fetches
instructions and increments the PC to obtain the address of the next sequential instruction.
Fig 3.4: A portion of the datapath used for fetching instructions and incrementing the
program counter. The fetched instruction is used by other parts of the datapath
R-FORMAT INSTRUCTIONS
The R-format instructions read two registers, perform an ALU operation on the contents of the registers, and write the result to a register.
For each data word to be read from the registers, we need an input to the register file
that specifies the register number to be read and an output from the register file that
will carry the value that has been read from the registers.
To write a data word, we will need two inputs: one to specify the register number to
be written and one to supply the data to be written into the register. The register file
always outputs the contents of whatever register numbers are on the Read register
inputs. Writes, however, are controlled by the write control signal, which must be
asserted for a write to occur at the clock edge.
www.Notesfree.in
www.Notesfree.in
Figure 3.5a shows the result; we need a total of four inputs (three for register numbers
and one for data) and two outputs (both for data). The register number inputs are 5
bits wide to specify one of 32 registers (32 = 2^5), whereas the data input and two data
output buses are each 32 bits wide.
Figure 3.5b shows the ALU, which takes two 32-bit inputs and produces a 32-bit
result, as well as a 1-bit signal if the result is 0.
Fig 3.5: The two elements needed to implement R-format ALU operations are the register file and the ALU.
For a load or store, the memory address is computed by adding the contents of the base register ($t2) to the 16-bit signed offset field of the instruction.
Fig 3.6: The two units needed to implement loads and stores, in addition to the register
file and ALU of Figure 3.5
Since the ALU provides an output signal that indicates whether the result was 0, we can send
the two register operands to the ALU with the control set to do a subtract. If the Zero signal
out of the ALU unit is asserted, we know that the two values are equal. Although the Zero
output always signals if the result is 0, we will be using it only to implement the equal test of
branches. Later, we will show exactly how to connect the control signals of the ALU for use
in the datapath.
The jump instruction operates by replacing the lower 28 bits of the PC with the lower 26 bits
of the instruction shifted left by 2 bits.
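This replacement of the lower PC bits can be written out as a bit-level computation; in the Python sketch below the PC value and 26-bit field are assumed example values:

```python
pc = 0x40001000          # current PC: the upper 4 bits are kept
target26 = 0x0000F00     # hypothetical 26-bit target field from the instruction

# Replace the lower 28 bits of the PC with the 26-bit field shifted left by 2
new_pc = (pc & 0xF0000000) | ((target26 & 0x03FFFFFF) << 2)

print(hex(new_pc))       # 0x40003c00
```

Shifting left by 2 supplies the two low-order zero bits of a word-aligned address, which is why a 26-bit field can name a 28-bit byte address.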
CONTROL IMPLEMENTATION SCHEME
This simple implementation covers load word (lw), store word (sw), branch equal (beq), and
the arithmetic-logical instructions add, sub, AND, OR, and set on less than.
The ALU Control
The MIPS ALU defines the following six combinations of the four control inputs: 0000 (AND),
0001 (OR), 0010 (add), 0110 (subtract), 0111 (set on less than), and 1100 (NOR). Depending on the
instruction class, the ALU will need to perform one of these functions. For branch equal, the ALU
must perform a subtraction. We can generate the 4-bit ALU control input using a small control
unit that has as inputs the function field of the instruction and a 2-bit control field, which we call ALUOp.
ALUOp indicates whether the operation to be performed should be add (00) for loads and
stores, subtract (01) for beq, or determined by the operation encoded in the funct field (10).
The output of the ALU control unit is a 4-bit signal that directly controls the ALU by
generating one of the 4-bit combinations shown previously. In Figure 3.2, we show how to
set the ALU control inputs based on the 2-bit ALUOp control and the 6-bit function code.
Later in this chapter we will see how the ALUOp bits are generated from the main control
unit.
Table 3.2: The ALUOp control bits and the different function codes for the R-
type instruction.
Using multiple levels of control can reduce the size of the main control unit. Using several
smaller control units may also potentially increase the speed of the control unit. There are
several different ways to implement the mapping from the 2-bit ALUOp field and the 6-bit
funct field to the four ALU operation control bits.
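One such implementation of the mapping can be sketched directly in code; the Python below is an illustration using the standard MIPS funct codes, not the hardware truth table itself:

```python
# funct-field decoding for the R-type rows of the mapping
# (standard MIPS codes: add=100000, sub=100010, and=100100,
#  or=100101, slt=101010).
R_TYPE = {
    0b100000: 0b0010,  # add
    0b100010: 0b0110,  # subtract
    0b100100: 0b0000,  # AND
    0b100101: 0b0001,  # OR
    0b101010: 0b0111,  # set on less than
}

def alu_control(alu_op, funct):
    """Map the 2-bit ALUOp and 6-bit funct field to the 4-bit ALU control."""
    if alu_op == 0b00:            # lw/sw: address calculation
        return 0b0010             # add
    if alu_op == 0b01:            # beq: comparison
        return 0b0110             # subtract
    return R_TYPE[funct]          # R-type: decided by the funct field

print(format(alu_control(0b10, 0b101010), "04b"))  # 0111 (set on less than)
```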
Table 3.3 shows the truth table for how the 4-bit ALU control is set depending on these two
input fields. Since the full truth table is very large (2^8 = 256 entries), we show only the truth
table entries for which the ALU control must have a specific value.
Table 3.3: The truth table for the 4 ALU control bits (called Operation)
Designing the Main Control Unit
To understand how to connect the fields of an instruction to the datapath, it is useful to
review the formats of the three instruction classes: the R-type, branch, and load-store
instructions. Figure 3.8 shows these formats.
FIGURE 3.8 The three instruction classes (R-type, load and store, and branch) use two
different instruction formats.
Thus, we will need to add a multiplexor to select which field of the instruction is used to
indicate the register number to be written. The first design principle, simplicity favors
regularity, pays off here in specifying control.
The PCSrc signal should be asserted (activated or set true) if the instruction is branch on equal (a decision that
the control unit can make) and the Zero output of the ALU, which is used for equality
comparison, is asserted. To generate the PCSrc signal, we will need to AND together
a signal from the control unit, which we call Branch, with the Zero signal out of the
ALU.
Table 3.6: The control function for the simple single-cycle implementation is completely
specified by this truth table.
Note that a multiplexor whose control is 0 has a definite action, even if its control line is not
highlighted. Multiple-bit control signals are highlighted if any constituent signal is asserted.
Table 3.5: The setting of the control lines is completely determined by the opcode fields
of the instruction.
Fig 3.10 shows the datapath with the control unit and the control signals. Since the setting of the
control lines depends only on the opcode, we define whether each control signal should be 0, 1, or don't care (X) for each opcode.
1. The instruction is fetched from the instruction memory, and the PC is incremented.
2. Two registers, $t2 and $t3, are read from the register file; also, the main control unit
computes the setting of the control lines during this step.
3. The ALU operates on the data read from the register file, using the function code (bits 5:0,
which is the funct field, of the instruction) to generate the ALU function.
4. The result from the ALU is written into the register file using bits 15:11 of the instruction
to select the destination register ($t1).
PIPELINING
Pipelining is an implementation technique in which multiple instructions are overlapped in
execution. The computer pipeline is divided into stages. Each stage completes a part of an
instruction in parallel. The stages are connected one to the next to form a pipe - instructions
enter at one end, progress through the stages, and exit at the other end.
Pipelining can be illustrated with a laundry analogy. The non-pipelined approach to laundry would be as follows:
1. Place one dirty load of clothes in the washer.
2. When the washer is finished, place the wet load in the dryer.
3. When the dryer is finished, place the dry load on a table and fold.
4. When folding is finished, ask your roommate to put the clothes
away. When your roommate is done, start over with the next dirty load.
With pipelining, the computer architecture allows the next instructions to be fetched
while the processor is performing arithmetic operations, holding them in a buffer
close to the processor until each instruction operation can be performed.
How Pipelining Works
The pipeline is divided into segments, and each segment can execute its operation
concurrently with the other segments. Once a segment completes an operation, it passes
the result to the next segment in the pipeline and fetches the next operation from the
preceding segment.
The pipelined approach takes much less time, as Figure 3.14 shows. As soon as the
washer is finished with the first load and the load is placed in the dryer, you load the washer with the second dirty load.
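The timing advantage can be sketched numerically. This assumes idealized equal-length stages (real stage times differ); the function name and numbers are illustrative:

```python
def total_time(n_tasks: int, n_stages: int, stage_time: float, pipelined: bool) -> float:
    """Total time to push n_tasks through n_stages of equal length."""
    if pipelined:
        # the first task fills the pipe; each later task finishes one stage later
        return (n_stages + n_tasks - 1) * stage_time
    # non-pipelined: each task runs all stages before the next one starts
    return n_tasks * n_stages * stage_time

# 4 loads of laundry, 4 stages of 30 minutes each:
print(total_time(4, 4, 30, pipelined=False))  # 480 minutes
print(total_time(4, 4, 30, pipelined=True))   # 210 minutes
```

The same formula applies to instructions: with a full pipeline, one instruction completes per clock cycle even though each instruction still takes all five stages.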
First, all MIPS instructions are the same length. This restriction makes it much easier
to fetch instructions in the first pipeline stage and to decode them in the second
stage.
Second, MIPS has only a few instruction formats, with the source register fields
being located in the same place in each instruction. This symmetry means that the
second stage can begin reading the register file at the same time that the hardware is
determining what type of instruction was fetched.
Third, memory operands only appear in loads or stores in MIPS. This restriction
means we can use the execute stage to calculate the memory address and then access
memory in the following stage.
Fourth, operands must be aligned in memory. Hence, we need not worry about a
single data transfer instruction requiring two data memory accesses; the requested data can be transferred between processor and memory in a single pipeline stage.
PIPELINE HAZARDS
There are situations in pipelining when the next instruction cannot execute in the following clock cycle. These events are called hazards. In the laundry analogy, a hazard would arise if we had a washer-dryer
combination instead of a separate washer and dryer, or if our roommate was busy
doing something else and wouldn't put clothes away. Our carefully scheduled pipeline
plans would then be foiled.
DATA HAZARDS
It is also called a pipeline data hazard: a planned instruction cannot execute in
the proper clock cycle because data that is needed to execute the instruction is not yet
available.
Data hazards occur when the pipeline must be stalled because one step must wait for
another to complete.
In a computer pipeline, data hazards arise from the dependence of one instruction on
an earlier one that is still in the pipeline. For example, suppose we have an add
instruction followed immediately by a subtract instruction that uses the sum ($s0):
add $s0, $t0, $t1
sub $t2, $s0, $t3
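The dependence above can be checked mechanically: the hazard exists because the destination register of the earlier instruction appears among the sources of the later one. A minimal sketch (register names as strings; a real detector compares register numbers in pipeline registers):

```python
def has_data_hazard(producer_dest: str, consumer_srcs: tuple) -> bool:
    # A read-after-write (RAW) hazard: the later instruction reads a
    # register before the earlier instruction has written it back.
    return producer_dest in consumer_srcs

# add $s0, $t0, $t1 writes $s0; sub $t2, $s0, $t3 reads $s0 -> hazard
print(has_data_hazard("$s0", ("$s0", "$t3")))  # True
```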
Without intervention, a data hazard could severely stall the pipeline. The add
instruction doesn't write its result until the fifth stage, meaning that we would have to
waste three clock cycles in the pipeline.
To resolve the data hazard in the code sequence above, as soon as the ALU creates
the sum for the add operation, we can supply it as an input for the subtract. Adding
extra hardware to retrieve the missing item early from the internal resources is called
forwarding or bypassing.
Figure below shows the connection to forward the value in $s0 after the
execution stage of the add instruction as input to the execution stage of the sub
instruction.
Suppose the first instruction were instead a load of $s0:
lw $s0, 20($t1)
Even with forwarding, we would have to stall one stage for this load-use data hazard.
The figure below shows an important pipeline concept, officially called a pipeline stall, often given the nickname bubble.
Fig 3.17: A stall even with forwarding when an R-format instruction following a load tries to use t
CONTROL HAZARDS
It is also called a branch hazard: the proper instruction cannot execute in the proper pipeline clock cycle because the instruction that was fetched is not the one that is needed.
The pipeline cannot possibly know what the next instruction should be, since it has only just received the branch instruction from memory.
BRANCH PREDICTION
A method of resolving a branch hazard that assumes a given outcome for the branch and proceeds from that assumption rather than waiting to ascertain the actual outcome.
Static branch prediction relies on a fixed assumption made before the program runs, such as always predicting that branches are untaken.
Dynamic branch prediction guesses at run time, adjusting the next prediction depending on the success of
recent guesses. One popular approach to dynamic prediction of branches is keeping a history
for each branch as taken or untaken, and then using the recent past behavior to predict the
future.
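The history-based scheme above can be sketched as a 1-bit predictor: remember each branch's last outcome and predict it will repeat. This is a simplified model, not MIPS hardware; class and attribute names are illustrative:

```python
class OneBitPredictor:
    """Minimal 1-bit dynamic branch predictor."""

    def __init__(self):
        self.history = {}  # branch address -> last outcome (True = taken)

    def predict(self, pc: int) -> bool:
        # branches never seen before default to "untaken"
        return self.history.get(pc, False)

    def update(self, pc: int, taken: bool) -> None:
        # record the actual outcome for use by the next prediction
        self.history[pc] = taken

p = OneBitPredictor()
print(p.predict(0x400))   # False (no history yet)
p.update(0x400, True)
print(p.predict(0x400))   # True (last outcome was taken)
```

Real predictors typically use 2-bit saturating counters instead, so a single mispredict at the end of a loop does not flip the prediction.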
PIPELINED DATAPATH
Figure 3.19 shows the single-cycle datapath with the pipeline stages identified. The
division of an instruction into five stages means a five-stage pipeline, which in turn means
that up to five instructions will be in execution during any single clock cycle. Thus, we must
separate the datapath into five pieces, with each piece named corresponding to a stage of
instruction execution:
1. IF: Instruction fetch
2. ID: Instruction decode and register file read
3. EX: Execution or address calculation
4. MEM: Data memory access
5. WB: Write back
In Figure 3.19, these five components correspond roughly to the way the datapath is
drawn; instructions and data move generally from left to right through the five stages as
they complete execution.
■ The selection of the next value of the PC, choosing between the incremented PC and
the branch address from the MEM stage
Data flowing from right to left does not affect the current instruction; these reverse data
movements influence only later instructions in the pipeline. Note that the first right-to-
left flow of data can lead to data hazards and the second leads to control hazards.
One way to show what happens in pipelined execution is to pretend that each instruction
has its own datapath, and then to place these datapaths on a timeline to show their
relationship. Figure 3.20 shows the execution of the instructions in Figure 4.27 by
displaying their private datapaths on a common timeline. Instead, we add registers to hold
data so that portions of a single datapath can be shared during instruction execution.
For example, as Figure 4.34 shows, the instruction memory is used during only one of the
five stages of an instruction, allowing it to be shared by following instructions during the
other four stages.
To retain the value of an individual instruction for its other four stages, the value read
from instruction memory must be saved in a register. Returning to our laundry analogy,
we might have a basket between each pair of stages to hold the clothes for the next step.
Figure 3.21 shows the pipelined datapath with the pipeline registers highlighted. All
instructions advance during each clock cycle from one pipeline register to the next. The
registers are named for the two stages separated by that register. For example, the pipeline
register between the IF and ID stages is called IF/ID. Notice that there is no pipeline
register at the end of the write-back stage.
All instructions must update some state in the processor—the register file, memory, or the
PC.
The IF/ID pipeline register supplies the 16-bit immediate field, which is sign-extended to 32 bits, and the register numbers to read the two registers. All three values
are stored in the ID/EX pipeline register, along with the incremented PC address.
FIGURE 3.22: IF and ID: First and second pipe stages of an instruction, with the active
portions of the datapath in Figure 3.21 highlighted.
3. Execute or address calculation: Figure 3.23 shows that the load instruction reads the
contents of register 1 and the sign-extended immediate from the ID/EX pipeline register and
adds them using the ALU. That sum is placed in the EX/MEM pipeline register.
FIGURE 3.23 EX: The third pipe stage of a load instruction, highlighting the portions
of the datapath in Figure 3.21 used in this pipe stage.
FIGURE 3.24 MEM and WB: The fourth and fifth pipe stages of a load instruction,
highlighting the portions of the datapath in Figure 3.21 used in this pipe stage.
Walking through a store instruction shows the similarity of instruction execution, as well as
passing the information for later stages. Here are the five pipe stages of the store instruction:
1. Instruction fetch: The instruction is read from memory using the address in the PC and
then is placed in the IF/ID pipeline register. This stage occurs before the instruction is
identified, so the top portion of Figure 3.25 works for store as well as load.
2. Instruction decode and register file read: This stage works the same for loads and stores. These first two stages are executed by all instructions, since it is too early to know the type of the instruction.
FIGURE 3.26: MEM and WB: The fourth and fifth pipe stages of a store
instruction.
5. Write-back: The bottom portion of Figure 3.26 shows the final step of the store. For this
instruction, nothing happens in the write-back stage.
For the store instruction we needed to pass one of the registers read in the ID stage to the
MEM stage, where it is stored in memory. The data was first placed in the ID/EX pipeline
register and then passed to the EX/MEM pipeline register.
PIPELINED CONTROL
Adding control to the pipelined datapath is referred to as pipelined control. We start with a
simple design that considers the problem one pipeline stage at a time. The first
step is to label the control lines on the existing datapath. Figure 3.27 shows those lines.
FIGURE 3.27: The pipelined datapath with the control signals identified.
To specify control for the pipeline, we need only set the control values during each
pipeline stage. Because each control line is associated with a component active in only a
single pipeline stage, we can divide the control lines into five groups according to the
pipeline stage.
1. Instruction fetch: The control signals to read instruction memory and to write the PC are
always asserted, so there is nothing special to control in this pipeline stage.
2. Instruction decode/register file read: As in the previous stage, the same thing happens at
every clock cycle, so there are no optional control lines to set.
3. Execution/address calculation: The signals to be set are RegDst, ALUOp, and ALUSrc
(see Figure 4.48). The signals select the Result register, the ALU operation, and either Read
data 2 or a sign-extended immediate for the ALU.
4. Memory access: The control lines set in this stage are Branch, MemRead, and MemWrite.
The branch equal, load, and store instructions set these signals, respectively. Recall that
PCSrc in Figure 4.48 selects the next sequential address unless control asserts Branch and the
ALU result was 0.
5. Write-back: The two control lines are MemtoReg, which decides between sending the
ALU result or the memory value to the register file, and RegWrite, which writes the chosen
value. Since pipelining the datapath leaves the meaning of the control lines unchanged, we
can use the same control values.
The control values are passed along by extending the pipeline registers to include the control information, so that each instruction carries its control signals with it down the pipeline.
UNIT V MEMORY & I/O SYSTEMS
Memory - Introduction
Computer memory is the storage space in the computer where the data to be
processed and the instructions required for processing are stored. The memory is
divided into a large number of small parts called cells. Each location or cell has a
unique address, which varies from zero to memory size minus one.
Computer memory is of two basic types: primary (volatile) memory and
secondary (non-volatile) memory. Random Access Memory
(RAM) is volatile memory and Read Only Memory (ROM) is non-volatile
memory.
MEMORY HIERARCHY
The memory hierarchy separates computer storage into a hierarchy based on
response time. Since response time, complexity, and capacity are related, the levels
may also be distinguished by their performance and controlling technologies.
Memory hierarchy affects performance in computer architectural design, algorithm
design, and lower-level programming constructs involving locality of
reference.
The Principle of Locality: A program is likely to access a relatively small portion of
the address space at any instant of time.
- Temporal Locality: Locality in Time
- Spatial Locality: Locality in Space
PIT 1 UNIT 5
Temporal Locality (Locality in Time): If an item is referenced, it will tend to be referenced again soon.
When power is switched off the data is lost; that is why it is known as volatile memory. Reading and writing in RAM is easy and fast.
The SRAM address line operates the switches and controls the T5 and T6
transistors, permitting read and write. For a read
operation, the signal is applied to the address line; T5 and T6 then turn on, and
the bit value is read from line B. For the write operation, the signal is applied
to bit line B, and its complement is applied to B'.
DRAM TECHNOLOGY
DRAM (Dynamic Random Access Memory) is also a type of RAM which
is constructed using capacitors and a few transistors. The capacitor is used for
storing the data, where bit value 1 signifies that the capacitor is charged and bit
value 0 means that the capacitor is discharged. The capacitor tends to discharge,
which results in leakage of charge.
The term dynamic indicates that the charges are continuously leaking even in
the presence of a continuous power supply, which is the reason it consumes more
power. To retain data for a long time, it needs to be repeatedly refreshed, which
requires additional refresh circuitry.
Due to the leaking charge, DRAM loses data over time even while power is switched on, unless it is
refreshed. DRAM is available in higher capacities and is less expensive. It requires
only a single transistor for a single block of memory.
Working of typical DRAM cell:
At the time of reading and writing the bit value from the cell, the address line is
activated. The transistor present in the circuitry behaves as a switch that is
closed (allowing current to flow) if a voltage is applied to the address line and
open (no current flows) if no voltage is applied to the address line. For the write
operation, a voltage signal is applied to the bit line, where a high voltage represents
1 and a low voltage represents 0. A signal is then applied to the address line, which
enables transfer of the charge to the capacitor.
When the address line is chosen for executing read operation, the transistor
turns on and the charge stored on the capacitor is supplied out onto a bit line
and to a sense amplifier.
The sense amplifier determines whether the cell contains a logic 1 or a logic 0 by
comparing the capacitor voltage to a reference value. Reading the cell
discharges the capacitor, which must be restored to complete the
operation. Thus, even though a DRAM is basically an analog device, it is used to store
a single bit (i.e., 0 or 1).
Key Differences Between SRAM and DRAM
1. SRAM is an on-chip memory whose access time is small while DRAM is an
off-chip memory which has a large access time. Therefore SRAM is faster than
DRAM.
2. DRAM is available in larger storage capacity while SRAM is of smaller size.
3. SRAM is expensive whereas DRAM is cheap.
4. The cache memory is an application of SRAM. In contrast, DRAM is used in
main memory.
5. DRAM is highly dense. In contrast, SRAM is less dense.
6. The construction of SRAM is complex due to the usage of a large number of
transistors. On the contrary, DRAM is simple to design and implement.
7. In SRAM a single block of memory requires six transistors whereas DRAM
needs just one transistor for a single block of memory.
8. Power consumption is higher in DRAM than in SRAM; SRAM consumes less power.
CACHE MEMORY:
A cache is a small and very fast temporary storage memory. It is designed to
speed up the transfer of data and instructions. It is located inside or close to the CPU chip, and it is faster than RAM.
The data and instructions are retrieved from RAM when the CPU uses them for the first time, and a copy is then stored in the cache.
The time efficiency of using cache memories results from the locality of references in programs.
the requested information is fetched from the cache. The actions concerned with a
read with a hit are shown in the figure below.
On a cache miss, the blocks containing the requested word are fetched to the cache
memories at both levels. The size of the cache block at the first level is from 8 to
several tens of bytes (the number must be a power of 2). The size of the block in the
second-level cache is many times larger than the size of the block at the first level.
The cache memory can be connected in different ways to the processor and the
main memory:
As an additional subsystem connected to the system bus that connects the
processor with the main memory,
As a subsystem that intermediates between the processor and the main memory,
As a separate subsystem connected with the processor, in parallel regarding the
main memory.
Categories of Cache Misses
We can subdivide cache misses into one of three categories:
A compulsory miss (or cold miss): It is also known as a cold-start miss or first-
reference miss. These misses occur on the first access to a block; the
block must be brought into the cache.
A conflict miss: It is also known as collision misses or interference misses.
These misses occur when several blocks are mapped to the same set or block
frame. These misses occur in the set associative or direct mapped block
placement strategies.
A capacity miss: These misses occur when the program working set is much
larger than the cache capacity. Since the cache cannot contain all the blocks needed for
program execution, it discards blocks.
MEASURING AND IMPROVING CACHE PERFORMANCE
We begin by examining ways to measure and analyze cache
performance. CPU time can be divided into the clock cycles that the CPU spends
executing the program and the clock cycles that the CPU spends waiting for the
memory system.
Measuring the Cache Performance
We assume that the costs of cache accesses that are hits are part of the
normal CPU execution cycles. Thus
The read-stall cycles can be defined in terms of the number of read accesses
per program, the miss penalty in clock cycles for a read, and the read miss rate:
Since the write buffer stalls depend on the proximity of writes, and not just
the frequency, it is not possible to give a simple equation to compute such stalls.
Write-back schemes also have potential additional stalls arising from the
need to write a cache block back to memory when the block is replaced. In most
write-through cache organizations, the read and write miss penalties are the same.
If we assume that the write buffer stalls are negligible, we can combine the reads
and writes by using a single miss rate and the miss penalty:
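The combined relation is: memory-stall cycles = memory accesses per program × miss rate × miss penalty. A sketch of this and the resulting CPU time (the function names and example numbers are illustrative):

```python
def memory_stall_cycles(accesses: int, miss_rate: float, miss_penalty: int) -> float:
    # memory-stall cycles = accesses/program x miss rate x miss penalty (in cycles)
    return accesses * miss_rate * miss_penalty

def cpu_time(instructions: int, base_cpi: float, stalls: float, clock_ns: float) -> float:
    # execution cycles plus memory-stall cycles, times the clock period
    return (instructions * base_cpi + stalls) * clock_ns

# Assumed example: 1M accesses, 2% miss rate, 100-cycle miss penalty
print(memory_stall_cycles(1_000_000, 0.02, 100))  # 2000000.0 stall cycles
```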
During program execution, data can move from one location to another, and
possibly be duplicated.
Mapping Function
The correspondence between the main memory blocks and those in the
cache is specified by a mapping function.
The different Cache mapping techniques are as follows:-
1) Direct Mapping
2) Associative Mapping
3) Set-Associative Mapping
Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K)
words, and assume that the main memory is addressable by a 16-bit address. Main
memory is 64K words, which will be viewed as 4K blocks of 16 words each.
(1) Direct Mapping:-
The simplest way to determine the cache locations in which to store memory blocks is
the direct-mapping technique.
In this technique, block j of the main memory maps onto block j modulo 128 of the
cache. Thus main memory blocks 0, 128, 256, … are stored at cache
block 0; blocks 1, 129, 257, … are stored at block 1, and so on.
Placement of a block in the cache is determined from the memory address. The memory
address is divided into 3 fields; the lower 4 bits select one of the 16 words in a
block.
When a new block enters the cache, the 7-bit cache block field determines the
cache position in which this block must be stored.
The higher-order 5 bits of the memory address of the block are stored in 5 tag bits
associated with its location in the cache. They identify which of the 32 blocks that are
mapped into this cache position is currently resident in the cache.
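The 5-bit tag / 7-bit block / 4-bit word split of the 16-bit address can be sketched directly (function name illustrative; the field widths follow the example cache above):

```python
def split_direct(addr: int):
    """Split a 16-bit word address for a direct-mapped cache of
    128 blocks x 16 words: 5-bit tag | 7-bit block | 4-bit word."""
    word  = addr & 0xF          # low 4 bits: word within the 16-word block
    block = (addr >> 4) & 0x7F  # next 7 bits: cache block position (= j mod 128)
    tag   = (addr >> 11) & 0x1F # high 5 bits: tag stored with the cache block
    return tag, block, word

# Memory block 129 (word address 129 * 16) maps to cache block 129 mod 128 = 1:
print(split_direct(129 * 16))  # (1, 1, 0)
```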
Advantages: It is easy to implement.
Drawback: Since more than one memory block is mapped onto a given
cache block position, contention may arise for that position even when the cache is
not full. Contention is resolved by allowing the new block to overwrite the
currently resident block. This method is not very flexible.
(2) Fully Associative Mapping:-
(3) Set-Associative Mapping:-
It is a combination of the direct- and associative-mapping techniques. The blocks of
the cache are grouped into sets, and a block of main memory can reside in any block
of a specific set. The contention problem of direct mapping is eased; at the same time,
the hardware cost is reduced by decreasing the size of the associative search.
Consider a cache with two blocks per set. In this case, memory blocks 0, 64, 128,
…, 4032 map into cache set 0, and they can occupy either of the two block positions
within this set.
Having 64 sets means that the 6-bit set field of the address determines which set
of the cache might contain the desired block. The tag bits of the address must be
associatively compared to the tags of the two blocks of the set to check if the
desired block is present. This is known as a two-way associative search.
Advantages:
The contention problem of the direct-mapping is eased by having a few
choices for block placement. At the same time, the hardware cost is reduced by
decreasing the size of the associative search.
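For the same example cache organized as 64 sets of 2 blocks, the address split becomes 6-bit tag / 6-bit set / 4-bit word. A sketch (function name illustrative):

```python
def split_two_way(addr: int):
    """Split a 16-bit word address for 64 sets of 2 blocks (16 words each):
    6-bit tag | 6-bit set | 4-bit word."""
    word = addr & 0xF           # 4-bit word within the 16-word block
    set_ = (addr >> 4) & 0x3F   # 6-bit set field: memory block j -> set j mod 64
    tag  = (addr >> 10) & 0x3F  # 6-bit tag compared against both blocks of the set
    return tag, set_, word

# Memory blocks 0, 64, 128, ... all land in set 0:
print(split_two_way(64 * 16))  # (1, 0, 0)
```

Compared with direct mapping, two bits have moved from the block/set field into the tag, which is why two tag comparators are needed per lookup.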
m-Way Set Associativity:
We can also think of all block placement strategies as a variation on set
associativity. The following figure shows the possible associativity structures for
an eight-block cache.
A direct-mapped cache is simply a one-way set-associative cache: each cache
entry holds one block and each set has one element.
A fully associative cache with m entries is simply an m-way
set-associative cache: it has one set with m blocks, and an entry can reside in
any block within that set.
VIRTUAL MEMORY
Virtual memory separates logical memory from physical memory. This memory separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
Demand Paging
Virtual memory is commonly implemented by demand paging.
A demand-paging system is similar to a paging system with swapping.
Pages are loaded only on demand and not in advance.
Processes reside on secondary memory (which is usually a disk). When a
process is to be executed, it is swapped into memory. Rather than swapping the
entire process into memory, however, we use a lazy swapper. A lazy swapper
never swaps a page into memory unless that page will be needed.
Basic Concepts
Instead of swapping in a whole process, the pager brings only those necessary
pages into memory. Thus, it avoids reading into memory pages that will not be
used anyway, decreasing the swap time and the amount of physical memory
needed.
When the desired page has been brought into memory, the internal table kept with the process and the page table are modified to indicate that the page is now in memory.
Basic Method:
Physical memory is broken into fixed-sized blocks called frames. Logical memory
is also broken into blocks of the same size called pages. When a process is to be
executed, its pages are loaded into any available memory frames from the backing
store. The backing store is divided into fixed-sized blocks that are of the same size
as the memory frames.
Every logical address generated by the CPU is divided into two parts:
page number (p): the high-order m - n bits
page offset (d): the low-order n bits
Where p is an index into the page table and d is the displacement within the page.
The hardware support for paging is illustrated in the above Figure. Every
address generated by the CPU is divided into two parts: a page number (p) and
a page offset (d). The page number is used as an index into a page table. The
page table contains the base address of each page in physical memory. This
base address is combined with the page offset to define the physical memory
address that is sent to the memory unit. The paging model of memory is shown
in the Figure given below.
The page size (like the frame size) is defined by the hardware. The size of a
page is typically a power of 2, varying between 512 bytes and 16 MB per page,
depending on the computer architecture. The selection of a power of 2 as a page
size makes the translation of a logical address into a page number and page
offset particularly easy.
For example, with a page size of 4 bytes, logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical
address 4 maps to physical address 24 (= (6 x 4) + 0). Similarly, logical address 13 maps to
physical address 9.
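The arithmetic of the example can be checked in a few lines. The page-table contents below are assumed from the standard figure (page 0 → frame 5, 1 → 6, 2 → 1, 3 → 2); they reproduce both translations in the text:

```python
PAGE_SIZE = 4                             # bytes per page, as in the example
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page -> frame (assumed mapping)

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)   # p = high bits, d = low bits
    return page_table[page] * PAGE_SIZE + offset

print(translate(4))   # 24: page 1, offset 0 -> frame 6
print(translate(13))  # 9:  page 3, offset 1 -> frame 2
```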
The standard solution to this problem is to use a special, small,
fast-lookup hardware cache, called the translation look-aside buffer (TLB). The
TLB is associative, high-speed memory. Each entry in the TLB consists of two
parts: a key (or tag) and a value. Typically, the number of entries in a TLB is
small, often numbering between 64 and 1,024.
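A tiny model of such a key/value lookup (a sketch only; the FIFO eviction is a simplification, and real TLBs are hardware with policies such as LRU):

```python
class TLB:
    """Model of a small fully associative TLB: each entry is a
    (key/tag = page number, value = frame number) pair."""

    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self.entries = {}          # page number -> frame number

    def lookup(self, page: int):
        return self.entries.get(page)   # None signals a TLB miss

    def insert(self, page: int, frame: int) -> None:
        if len(self.entries) >= self.capacity:
            # evict the oldest entry (naive FIFO stand-in for a real policy)
            self.entries.pop(next(iter(self.entries)))
        self.entries[page] = frame

tlb = TLB(capacity=2)
tlb.insert(1, 6)
print(tlb.lookup(1))  # 6    (TLB hit)
print(tlb.lookup(9))  # None (TLB miss: fall back to the page table)
```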
PROTECTION in TLB
Protection bits are kept in the page table. The valid-invalid bit scheme can be used
for this purpose. When this bit is set to "valid," this value indicates that the
associated page is both legal and in memory. If the bit is set to "invalid," this value
indicates that the page either is not valid (that is, not in the logical address space of
the process), or is valid but is currently on the disk.
The page-table entry for a page that is brought into memory is set as usual, but
the page table entry for a page that is not currently in memory is simply marked
invalid, or contains the address of the page on disk. This situation is depicted in
the following figure.
memory, rather than an invalid address error as a result of an attempt to use an illegal memory address.
INPUT/OUTPUT SYSTEM
The main data-processing functions of a computer involve its CPU and external
memory. The CPU fetches instructions and data from memory, processes them,
and eventually stores the results back in memory.
The other system components like secondary memory, user interface devices,
and so on constitute the input/output (I/O) system. One of the basic features of a
computer is its ability to exchange data with other devices. The data transfer
rate of peripherals is much slower than that of the memory or CPU. The I/O
subsystem provides the mechanism for communications between CPU and the
outside world.
The connections between the I/O devices, processor, and memory are
historically called buses, although the term means shared parallel wires and most
I/O connections today are closer to dedicated serial lines. Communication
among the devices and the processor uses both interrupts and protocols on the
interconnect.
I/O devices are incredibly diverse. Three characteristics are useful in
organizing this wide variety:
1. Behavior: Input (read once), output (write only, cannot be read), or storage
(can be read and usually rewritten).
2. Partner: Either a human or a machine is at the other end of the I/O device, either feeding data on input or reading data on output.
3. Data rate: The peak rate at which data can be transferred between the I/O device and the main memory or processor.
I/O Interface
The address decoder, the data and status registers, and the control circuitry
required to coordinate I/O transfers constitute the device’s interface circuit.
The address decoder enables the device to recognize its address when this
address appears on the address lines. The data register holds the data being
transferred to or from the processor.
The status register contains information relevant to the operation of the I/O
device. Both the data and status registers are connected to the data bus and
assigned unique addresses.
I/O devices operate at speeds that are vastly different from that of the processor.
When a human operator is entering characters at a keyboard, the processor is
capable of executing millions of instructions between successive character
entries. An instruction that reads a character from the keyboard should be
executed only when a character is available in the input buffer of the keyboard
interface. Also, we must make sure that an input character is read only once.
For an input device such as keyboard, a status flag, SIN, is included in the
interface circuit as part of the status register. This flag is set to 1 when a
character is entered at the keyboard and cleared to 0 once this character is read
by the processor. A similar procedure can be used to control output operations
using an output status flag, SOUT.
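The SIN protocol above amounts to polling: busy-wait on the status flag, read the character, then clear the flag so the character is read only once. A sketch using dicts to stand in for the interface's status and data registers (names are illustrative):

```python
def read_char(status: dict, data_reg: dict) -> str:
    """Polling input: busy-wait on SIN, then read one character."""
    while not status["SIN"]:   # poll: no character available yet
        pass                    # real hardware would re-read the status register
    ch = data_reg["DATA"]
    status["SIN"] = False       # clear the flag: char is read only once
    return ch

status = {"SIN": True}          # a key has just been pressed
data_reg = {"DATA": "A"}
print(read_char(status, data_reg))  # A
print(status["SIN"])                # False
```

Polling wastes processor cycles while waiting, which is one motivation for the interrupt-driven and DMA approaches discussed next.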
I/O Control Methods
Input-output operations are distinguished by the extent to which the CPU is
involved in their execution. An I/O operation refers to a data transfer between an I/O
device and memory, or between an I/O device and the CPU.
Commonly used mechanisms for implementing I/O operations:
There are three commonly used methods for implementing I/O operations: programmed I/O, interrupt-driven I/O, and direct memory access (DMA).
Once the interrupt has been serviced, the CPU can resume execution of the interrupted
program.
DMA transfers are performed by a control circuit that is part of the I/O device
interface. We refer to this circuit as a DMA controller. The DMA controller
performs the functions that would normally be carried out by the processor
when accessing the main memory.
For each word transferred, it provides the memory address and all the bus
signals that control data transfer. Since it has to transfer blocks of data, the
DMA controller must increment the memory address for successive words
and keep track of the number of transfers.
Although a DMA controller can transfer data without intervention by the
processor, its operation must be under the control of a program executed by the
processor.
To initiate the transfer of a block of words, the processor sends the starting
address, the number of words in the block, and the direction of the transfer. On
receiving this information, the DMA controller proceeds to perform the
requested operation. When the entire block has been transferred, the controller
informs the processor by raising an interrupt signal.
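The controller's bookkeeping (increment the address, count down the words, interrupt at the end) can be sketched as follows. This is a software model, not controller hardware; names are illustrative:

```python
def dma_transfer(memory: dict, start_addr: int, block: list) -> bool:
    """Mimic a DMA controller moving one block into memory:
    drive an address per word, increment it, count down the transfers,
    and 'raise an interrupt' (return True) when the count reaches zero."""
    addr, remaining = start_addr, len(block)
    for word in block:
        memory[addr] = word    # controller supplies address and data signals
        addr += 1              # increment the memory address for the next word
        remaining -= 1         # keep track of the number of transfers left
    return remaining == 0      # completion -> interrupt the processor

mem = {}
print(dma_transfer(mem, 100, [10, 20, 30]))  # True (block done, interrupt raised)
print(mem[102])                               # 30
```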
While a DMA transfer is taking place, the program that requested the transfer
cannot continue, and the processor can be used to execute another program.
After the DMA transfer is completed, the processor can return to the program
that requested the transfer. I/O operations are always performed by the
operating system of the computer in response to a request from an application
program.
The OS is also responsible for suspending the execution of one program and
starting another. Thus, for an I/O operation involving DMA, the OS puts the
program that requested the transfer in the Blocked state, initiates the DMA
operation, and starts the execution of another program. When the transfer is
completed, the DMA controller informs the processor by sending an interrupt
request. In response, the OS puts the suspended program in the Runnable state so
that it can be selected by the scheduler to continue execution.
Cycle Stealing –The DMA controller must use the bus only when the
processor does not need it, or it must force the processor to suspend operation
temporarily. This technique is referred to as cycle stealing. It allows the DMA
controller to transfer one data word at a time, after which it must return control
of the buses to the CPU.
DMA Controller
A simple DMA controller is a standard component in modern PCs, and many
bus-mastering I/O cards contain their own DMA hardware.
Handshaking between DMA controllers and their devices is accomplished
through two wires called the DMA-request and DMA-acknowledge wires.
While the DMA transfer is going on, the CPU does not have access to the PCI
bus (including main memory), but it does have access to its internal registers
and primary and secondary caches.
DMA can be done in terms of either physical addresses or virtual addresses that
are mapped to physical addresses. The latter approach is known as Direct
Virtual Memory Access, DVMA, and allows direct data transfer from one
memory-mapped device to another without using the main memory chips.
The controller is integrated into the processor board and manages all DMA data
transfers. Transferring data between system memory and an I/O device requires
two steps.
i. Data goes from the sending device to the DMA controller and then to the
receiving device. The microprocessor gives the DMA controller the location,
destination, and amount of data that is to be transferred. Then the DMA
controller transfers the data, allowing the microprocessor to continue with
other processing tasks. When a device needs to use the Micro Channel bus to
send or receive data, it competes with all the other devices that are trying to
gain control of the bus. This process is known as arbitration.
ii. The DMA controller does not arbitrate for control of the bus; instead, the
I/O device that is sending or receiving data (the DMA slave) participates in
arbitration.
DMA controller takes over the buses to manage the transfer directly between
the I/O device and memory
Bus Request (BR) – used by the DMA controller to request that the CPU give up
control of the buses.
Bus Grant (BG) – activated by the CPU to inform the external DMA controller
that the buses are in the high-impedance state (available).
Burst transfer – a block of memory words is transferred in a continuous burst
while the DMA controller is the bus master.
INTERRUPTS
An interrupt is a signal to the processor emitted by hardware or software
indicating an event that needs immediate attention. An interrupt alerts the
processor to a high-priority condition requiring the interruption of the current
code the processor is executing.
The processor responds by suspending its current activities, saving its state,
and executing a function called an interrupt handler (or an interrupt service
routine, ISR) to deal with the event. This interruption is temporary, and, after
the interrupt handler finishes, the processor resumes normal activities.
There are two types of interrupts: hardware interrupts and software
interrupts.
Hardware interrupts are used by devices to communicate that they require
attention from the operating system. Internally, hardware interrupts are
implemented using electronic alerting signals that are sent to the processor
from an external device, which is either a part of the computer itself, such as a
disk controller, or an external peripheral.
For example, pressing a key on the keyboard or moving the mouse triggers
hardware interrupts that cause the processor to read the keystroke or mouse
position. The act of initiating a hardware interrupt is referred to as an interrupt
request (IRQ).
A software interrupt is caused either by an exceptional condition in the
processor itself, or a special instruction in the instruction set which causes an
interrupt when it is executed. The former is often called a trap or exception
(For example a divide-by-zero exception) and is used for errors or events
occurring during program execution.
Each interrupt has its own interrupt handler. The number of hardware
interrupts is limited by the number of interrupt request (IRQ) lines to the
processor, but there may be hundreds of different software interrupts.
Interrupts are a commonly used technique for computer multitasking,
especially in real-time computing. Such a system is said to be interrupt-driven.
Interrupts Handling
• The interrupt mechanism allows devices to signal the CPU and to force
execution of a particular piece of code.
• When an interrupt occurs, the program counter's value is changed to point to
an interrupt handler routine (also commonly known as a device driver) that
takes care of the device.
• The interface between the CPU and I/O device includes the following signals
for interrupting:
• The I/O device asserts the interrupt request signal when it wants service
from the CPU; and
• The CPU asserts the interrupt acknowledge signal when it is ready to
handle the I/O device's request.
• The CPU will call the interrupt handler associated with this priority; that
handler does not know which of the devices actually requested the interrupt.
The handler uses software polling to check the status of each device: in this
example, it would read the status registers of devices 1, 2, and 3 to see which
of them is ready and requesting service. This example illustrates how
priorities affect the order in which I/O requests are handled.
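The polling step can be sketched as follows. The `READY` bit position and the device numbering are assumptions for illustration; a real handler would read memory-mapped status registers instead of a dictionary.

```python
# Software polling: the shared handler checks each device's status
# register in priority order and services the first requester it finds.
READY = 0x1  # assumed "ready and requesting service" status bit

def poll(status_registers):
    """Return the first (highest-priority) device whose ready bit is set,
    or None if no device is requesting service."""
    for device, status in sorted(status_registers.items()):
        if status & READY:
            return device
    return None

# Devices 1..3 share one interrupt line; only device 2 is requesting.
print(poll({1: 0x0, 2: 0x1, 3: 0x0}))  # 2
```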
Vectors provide flexibility in a different dimension, namely, the ability to
define the interrupt handler that should service a request from a device. Figure
shows the hardware structure required to support interrupt vectors. In addition
to the interrupt request and acknowledge lines, additional interrupt vector lines
run from the devices to the CPU.
After a device’s request is acknowledged, it sends its interrupt vector over those
lines to the CPU. The CPU then uses the vector number as an index in a table
stored in memory as shown in Figure 3.5. The location referenced in the
interrupt vector table by the vector number gives the address of the handler.
There are two important things to notice about the interrupt vector mechanism.
First, the device, not the CPU, stores its vector number. In this way, a device
can be given a new handler simply by changing the vector number it sends,
without modifying the system software. For example, vector numbers can be
changed by programmable switches.
The second thing to notice is that there is no fixed relationship between vector
numbers and interrupt handlers. The interrupt vector table allows arbitrary
relationships between devices and handlers. The vector mechanism provides
great flexibility in the coupling of hardware devices and the software routines
that service them.
Most modern CPUs implement both prioritized and vectored interrupts.
Priorities determine which device is serviced first, and vectors determine what
routine is used to service the interrupt. The combination of the two provides a
rich interface between hardware and software.
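The vector-table lookup described above can be modelled as a simple table of handler functions indexed by vector number. The vector numbers and handler names below are made up for illustration; the point is that the table, not the hardware, couples devices to handlers.

```python
# Vectored dispatch: the device supplies a vector number; the CPU uses
# it as an index into a table of handler addresses (Python functions here).
def keyboard_handler():
    return "keyboard serviced"

def disk_handler():
    return "disk serviced"

# No fixed relationship between vector number and handler: re-pointing a
# table entry changes the handler without touching the device hardware.
vector_table = {3: keyboard_handler, 7: disk_handler}

def dispatch(vector):
    return vector_table[vector]()

print(dispatch(7))  # disk serviced
```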
BUS STRUCTURE
A computer system is made up of three major components: the Central Processing
Unit (CPU) that processes data, the Memory Unit that holds data for processing, and
the Input and Output Unit that is used by the user to communicate with the computer.
But how do these different components communicate with each other?
They use a special electronic communication system called the bus. The computer
bus carries information over numerous pathways called circuit lines. The system
bus consists of the data bus, the address bus and the control bus.
Data bus- A bus which carries data to and from memory and I/O devices is called
a data bus.
Address bus- This is used to carry the address of data in the memory and its
width is equal to the number of bits in the MAR of the memory.
For example, if a computer has a 64K memory of 32-bit words, then the computer
will have a data bus 32 bits wide and an address bus 16 bits wide.
Control Bus- carries the control signals between the various units of the
computer. Ex: Memory Read/write, I/O Read/write
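The bus-width arithmetic in the 64K example can be checked directly: a memory of N addressable words needs ceil(log2(N)) address lines.

```python
import math

def address_bus_width(num_words):
    """Minimum number of address lines needed to name every word."""
    return math.ceil(math.log2(num_words))

print(address_bus_width(64 * 1024))    # 16 (64K words -> 16 address lines)
print(address_bus_width(1024 * 1024))  # 20 (1M words -> 20 address lines)
```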
Two types of Bus organizations:
Single Bus organization
Two bus Organization
Single Bus Architecture
Three units share the single bus. At any given point of time, information can
be transferred between any two units
Here I/O units use the same memory address space (memory-mapped I/O).
So no special instructions are required to address the I/O; it can be accessed
like a memory location.
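A toy model of this idea shows how a single load/store path serves both RAM and a device register; the register address 0xFF00 is an arbitrary illustration, not a real machine's memory map.

```python
# Memory-mapped I/O: the device's data register occupies an ordinary
# memory address, so plain load/store instructions reach the device.
DEVICE_REG = 0xFF00  # assumed address of the device's data register

class Bus:
    def __init__(self):
        self.ram = {}
        self.device_reg = 0

    def store(self, addr, value):
        if addr == DEVICE_REG:        # address decoder routes to the device
            self.device_reg = value
        else:                         # otherwise an ordinary memory write
            self.ram[addr] = value

    def load(self, addr):
        return self.device_reg if addr == DEVICE_REG else self.ram.get(addr, 0)

bus = Bus()
bus.store(DEVICE_REG, 0x41)   # same store operation as a memory write
bus.store(0x1000, 99)         # ordinary RAM write
print(bus.load(DEVICE_REG), bus.load(0x1000))  # 65 99
```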
Since all the devices do not operate at the same speed, it is necessary to
smooth out the differences in timing among them. A common approach is to
include buffer registers with the devices to hold the information during
transfers.
Ex: Communication between the processor and printer
Two Bus Architecture
Bus Arbitration Mechanism
The system bus is shared between the processor, an I/O processor, and multiple
controllers that have to access the bus, but only one of them can be granted
bus master status at any one instant.
The bus master has access to the bus at that instant.
There are two approaches to bus arbitration:
1. Centralized bus arbitration – A single bus arbiter performs the required
arbitration.
2. Distributed bus arbitration – All devices participate in the selection of the
next bus master.
Three methods in Centralized bus arbitration process
Daisy Chain method
Fixed Priority or Independent Bus Requests and Grant method
Polling or Rotating Priority method
DAISY CHAINING
o It is a centralized bus arbitration method. During any bus cycle, the bus
master may be any device – the processor or any DMA controller unit,
connected to the bus.
o Bus control passes from one bus master to the next one, then to the next, and
so on: from controller unit C0 to C1, then to C2, then to C3, and so on.
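The daisy-chain behaviour can be sketched as follows: the bus-grant signal ripples down the chain, so the requesting controller closest to the arbiter wins. Controller names C0..C3 follow the text; the list-based model is an illustration only.

```python
# Daisy-chain arbitration: the grant propagates from C0 onward and is
# absorbed by the first controller that is asserting a bus request.
def daisy_chain_grant(requests):
    """requests[i] is True if controller Ci asserted bus request.
    Returns the index of the winning controller, or None."""
    for i, requesting in enumerate(requests):
        if requesting:
            return i       # this controller absorbs the grant
    return None

# C1 and C3 both request; C1 wins purely by its position in the chain.
print(daisy_chain_grant([False, True, False, True]))  # 1
```

This also makes the stated disadvantage concrete: a device's priority is fixed by its physical position in the chain.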
Advantages
Simplicity and Scalability.
The user can add more devices anywhere along the chain, up to a certain
maximum value.
Disadvantages
The priority assigned to a device depends on its position in the
chain.
Propagation delay arises in this method.
If one device fails, the entire system stops working.
POLLING OR ROTATING PRIORITY METHOD
Adding bus masters is difficult, as it increases the number of address lines of
the circuit.
FIXED PRIORITY or INDEPENDENT REQUEST AND GRANT METHOD
Each controller has a separate bus request signal: BR0, BR1, …, BRn.
There are separate bus grant signals, BG0, BG1, …, BGn, for the controllers.
When one or more devices request control of the bus, they assert the start-
arbitration signal and place their 4-bit identification numbers on arbitration
lines ARB0 through ARB3.
Each device compares the code on the lines with its own and changes its bits
accordingly, by placing a 0 at the input of its driver.
Distributed arbitration is highly reliable because bus operation is not
dependent on any single device.
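The scheme above can be sketched as follows. The arbitration lines behave like a wired-OR of all drivers (modelled here with a bitwise OR); each device that sees a higher competing code on a bit stops driving the lower bits, so the highest 4-bit ID survives. This is a simplified illustration of the protocol, not a cycle-accurate model.

```python
# Distributed arbitration over wired-OR lines ARB3..ARB0:
# the requester with the highest 4-bit ID wins.
def distributed_arbitration(ids):
    """Return the winning 4-bit ID among simultaneous requesters."""
    codes = list(ids)
    for bit in (3, 2, 1, 0):                   # compare from the MSB down
        line = 0
        for c in codes:
            line |= c                           # wired-OR of all drivers
        if line & (1 << bit):
            # devices whose own bit is 0 here withdraw from arbitration
            codes = [c for c in codes if c & (1 << bit)]
    return codes[0]

# Devices 5, 6 and 3 contend; device 6 (0b0110) has the highest ID.
print(distributed_arbitration([0b0101, 0b0110, 0b0011]))  # 6
```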
INTERFACE CIRCUITS
An I/O interface consists of the circuitry required to connect an I/O device to a
computer bus. On one side of the interface, we have bus signals. On the other side,
we have a data path with its associated controls to transfer data between the
interface and the I/O device – port. We have two types:
Parallel port
Serial port
A parallel port transfers data in the form of a number of bits (8 or 16)
simultaneously to or from the device. A serial port transmits and receives data one
bit at a time. Communication with the bus is the same for both formats. The
conversion from the parallel to the serial format, and vice versa, takes place
inside the interface circuit. In a parallel port, the connection between the
device and the computer uses a multiple-pin connector and a cable with as many
wires. This arrangement is suitable for devices that are physically close to the
computer. A serial port is much more convenient and cost-effective where longer
cables are needed.
Typically, the functions of an I/O interface are:
• Provides a storage buffer for at least one word of data
• Contains status flags that can be accessed by the processor to determine whether
the buffer is full or empty
• Contains address-decoding circuitry to determine when it is being addressed by
the processor
• Generates the appropriate timing signals required by the bus control scheme
• Performs any format conversion that may be necessary to transfer data between
the bus and the I/O device, such as parallel-serial conversion in the case of a serial
port
Parallel Port
Input Port
Example1: Keyboard to Processor
Observe the parallel input port that connects the keyboard to the processor.
Whenever a key is pressed on the keyboard, an electrical connection is
established that generates an electrical signal. This signal is encoded by the
encoder into the ASCII code for the character pressed (as shown in the figure
below).
The input and output interfaces can be combined into a single interface circuit.
Serial Port
Unlike the parallel port, the serial port connects the processor to devices
that transmit only one bit at a time. On the device side, the data is
transferred in bit-serial form, and on the processor side, the data is
transferred in bit-parallel form.
The transformation of the format from serial to parallel, i.e., from the device
side to the processor side, takes place inside the serial port.
The serial interface port connected to the processor via system bus functions
similarly to the parallel port. The status and control block has two status flags SIN
and SOUT. The SIN flag is set to 1 when the I/O device inputs the data into the
DATA IN register through the input shift register and the SIN flag is cleared to 0
when the processor reads the data from the DATA IN register.
When the value of the SOUT register is 1 it indicates to the processor that the
DATA OUT register is available to receive new data from the processor. The
processor writes the data into the DATA OUT register and sets the SOUT flag to 0
and when the output shift register reads the data from the DATA OUT register, it
sets SOUT back to 1.
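The SIN/SOUT handshake just described can be modelled as a small state machine. Only the flag protocol is modelled here; the shift registers themselves are abstracted away, and the class and method names are illustrative.

```python
# Toy model of the serial port's SIN/SOUT status-flag protocol.
class SerialPort:
    def __init__(self):
        self.data_in = 0
        self.data_out = 0
        self.SIN = 0    # 1: DATA IN holds fresh data for the processor
        self.SOUT = 1   # 1: DATA OUT is free to accept new data

    def device_receives_char(self, ch):   # input shift register fills DATA IN
        self.data_in, self.SIN = ch, 1

    def processor_read(self):             # reading DATA IN clears SIN
        self.SIN = 0
        return self.data_in

    def processor_write(self, ch):        # writing DATA OUT clears SOUT
        assert self.SOUT == 1, "DATA OUT busy"
        self.data_out, self.SOUT = ch, 0

    def device_transmits(self):           # output shift register drains DATA OUT
        self.SOUT = 1
        return self.data_out

p = SerialPort()
p.device_receives_char(ord('A'))
print(p.SIN, p.processor_read(), p.SIN)      # 1 65 0
p.processor_write(ord('B'))
print(p.SOUT, p.device_transmits(), p.SOUT)  # 0 66 1
```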
There are two techniques for transmitting serial data:
1. Asynchronous Serial Transmission
2. Synchronous Serial Transmission
Standard I/O interfaces include:
– PCI (Peripheral Component Interconnect)
– SCSI (Small Computer System Interface)
– USB (Universal Serial Bus)
PCI (Peripheral Component Interconnect)
Host, main memory and PCI bridge are connected to disk, printer and Ethernet
interface through PCI bus. At any given time, one device is the bus master. It has
the right to initiate data transfers by issuing read and write commands. A master is
called an initiator in PCI terminology; it is either the processor or a DMA controller.
The addressed device that responds to read and write commands is called a target.
A complete transfer operation on the bus, involving an address and a burst of
data, is called a transaction.
SCSI Bus
It is a standard bus defined by the American National Standards Institute (ANSI).
A controller connected to a SCSI bus is an initiator or a target. The processor sends
a command to the SCSI controller, which causes the following sequence of events
to take place:
• The SCSI controller contends for control of the bus (initiator).
• When the initiator wins the arbitration process, it selects the target controller and
hands over control of the bus to it.
• The target starts an output operation. The initiator sends a command specifying
the required read operation.
• The target sends a message to the initiator indicating that it will temporarily
suspend the connection between them. Then it releases the bus. The target
controller sends a command to the disk drive to move the read head to the first
sector involved in the requested read operation.
• The target transfers the contents of the data buffer to the initiator and then
suspends the connection again.
• The target controller sends a command to the disk drive to perform another
seek operation.
• As the initiator controller receives the data, it stores them into the main memory
using the DMA approach.
• The SCSI controller sends an interrupt to the processor to inform it that the
requested operation has been completed.
USB (Universal Serial Bus)
Today, there are millions of different USB devices that can be connected to your
computer. The list below contains just a few of the most common.
Digital Camera
External drive
iPod or other MP3 players
Keyboard
Keypad
Microphone
Mouse
Printer
Joystick
Scanner
Smartphone
Tablet
Webcams
USB Architecture
When multiple I/O devices are connected to the computer through USB they all are
organized in a tree structure. Each I/O device makes a point-to-point connection
and transfers data using the serial transmission format discussed earlier under
'Interface Circuits'.
As we know a tree structure has a root, nodes and leaves. The tree structure
connecting I/O devices to the computer using USB has nodes which are also
referred to as a hub. A hub is the intermediary connecting point between the I/O
devices and the computer. Every tree has a root; here it is referred to as the
root hub, which connects the entire tree to the host computer. The leaves of the
tree are the I/O devices themselves, such as a mouse, keyboard, camera or speaker.
USB Protocols
All information transferred over the USB is organized in packets, where a packet
consists of one or more bytes of information
The information transferred on the USB can be divided into two broad
categories: control and data
Control packets perform such tasks as addressing a device to initiate data
transfer, acknowledging that data have been received correctly, or indicating an
error
Data packets carry information that is delivered to a device. For example, input
and output data are transferred inside data packets
USB Device States
A USB device can have several possible states as described below:
• Attached State: This state occurs when the device is attached to the Host.
• Powered State: After the device is attached, the Host provides power to the
device if it does not have its own power supply. The device should not draw
more than 100 mA in this state.
• Default State: This state occurs when the device is reset and has not been
assigned a unique address. In this state the device uses default control pipe for
communication and default address 0.
• Addressed State: The USB device enters this state after it gets a unique address
which is used for future communications.
• Configured: When the Host obtains required information from the device, it
loads the appropriate driver for the device. The host configures the device by
selecting a configuration. The device is now ready to do the operations it was
meant for.
• Suspended State: The USB device enters the suspended state when the bus
remains idle for more than 3 ms. In this state, the device must not draw more
than 500 µA of current.
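The states listed above form a simple state machine. The transition table below is a sketch of the legal forward path only; the event names are made up for illustration, and the reset and resume edges a real device supports are simplified away.

```python
# Sketch of the USB device-state machine described above.
TRANSITIONS = {
    ("Attached",   "power"):       "Powered",
    ("Powered",    "reset"):       "Default",
    ("Default",    "set_address"): "Addressed",
    ("Addressed",  "configure"):   "Configured",
    ("Configured", "idle_3ms"):    "Suspended",
}

def step(state, event):
    """Apply one event; events not defined for a state are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "Attached"
for event in ["power", "reset", "set_address", "configure"]:
    state = step(state, event)
print(state)  # Configured
```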
****************