1.1 Introduction
Integers are represented using the binary number system (BNS) in digital systems implemented on computers, micro-controllers or FPGAs. In BNS, any number is represented using two symbols, '0' and '1'. A number X of length n is represented in BNS as

X = (x_{n−1} x_{n−2} ... x_1 x_0)_2 (1.1)

Here, each digit or bit (x_i) takes a value from the set {0, 1}, and an integer is represented using n bits. The value of n is important for the correct representation of an integer, and it decides the accuracy of a digital system. The value of the integer can be evaluated from the binary representation as
X = x_{n−1}·2^{n−1} + x_{n−2}·2^{n−2} + ... + x_1·2^1 + x_0·2^0 = Σ_{i=0}^{n−1} x_i·2^i (1.2)
In the signed magnitude representation, the sign and magnitude of a number are represented separately. The sign is represented by a sign bit. For an n-bit binary number, 1 bit is reserved for the sign and (n − 1) bits are reserved for the magnitude. Generally, the MSB is used as the sign bit, and logic 1 on this bit indicates a negative number. This format is shown in Fig. 1.1. The maximum magnitude that can be represented in this number system is

X_max = 2^{n−1} − 1 (1.3)
This number can be either negative or positive depending on the MSB. If n = 8 then X_max = 127. In this representation, zero does not have a unique representation (both 00000000 and 10000000 denote zero). Signed magnitude representation has a symmetric range of numbers, which means that every positive number has a negative counterpart. The integer X = −9 is represented in signed magnitude as X = 10001001; here, 7 bits represent the magnitude and the MSB is 1 as the integer is negative.
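The signed magnitude encoding just described can be sketched in a few lines of Python (the function names are illustrative, not from the text):

```python
def to_signed_magnitude(x, n=8):
    """Encode integer x as an n-bit signed magnitude bit string (sketch)."""
    if abs(x) > 2**(n - 1) - 1:
        raise OverflowError("magnitude does not fit in n-1 bits")
    sign = '1' if x < 0 else '0'          # MSB is the sign bit
    return sign + format(abs(x), f'0{n - 1}b')

def from_signed_magnitude(bits):
    """Decode an n-bit signed magnitude string back to an integer."""
    mag = int(bits[1:], 2)                # (n-1) magnitude bits
    return -mag if bits[0] == '1' else mag
```

Note that both `'00000000'` and `'10000000'` decode to zero, illustrating the non-unique representation of zero mentioned above.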
Example
Addition of the two numbers X_1 = −9 and X_2 = 8 in signed magnitude is performed by subtracting the smaller magnitude from the larger one:

  1 0 0 0 1 0 0 1   X1 (−9)
− 0 0 0 0 1 0 0 0   X2 (+8)
  0 0 0 0 0 0 0 1   magnitude of Y

The sign of the result is taken from the operand with the larger magnitude (here X1), giving Y = 10000001 = −1.

[Fig. 1.1: Signed magnitude format — the MSB holds the sign, the remaining (n − 1) bits hold the magnitude.]
The MSB bits of the operands decide the operation to be performed on them. Table 1.1 shows the addition/subtraction operation depending on the MSBs of the operands. If the signs of the two operands are the same, then the sign of the output is also the same; otherwise, a comparison of the magnitudes of the two operands is required.
As in signed magnitude representation, zero does not have a unique representation in one's complement. This representation also has a symmetric range, and the MSB differentiates positive and negative numbers. The range of one's complement representation for n = 8 is −127 ≤ X ≤ 127. When a positive number X and a negative number −Y represented in one's complement are added, the result is X + (2^n − 1 − Y) = (2^n − 1) + (X − Y).
Example
Addition of two numbers X 1 = −9 and X 2 = 8 represented in one’s complement
can be done in the following way:
1 1 1 1 0 1 1 0 X1
+ 0 0 0 0 1 0 0 0 X2
1 1 1 1 1 1 1 0 Y
1 1 1 1 0 1 1 0 X1
+ 0 0 0 0 1 0 1 0 X2
cout = 1 0 0 0 0 0 0 0 0 Y
Here, the result must be 1 but we get 0 instead. The correction is done by adding cout back to the result; this is known as the end-around carry.
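The two additions above, including the end-around carry correction, can be reproduced with a short Python sketch (function names are illustrative):

```python
def ones_complement(x, n=8):
    """n-bit one's complement pattern of integer x, returned as an int in [0, 2**n)."""
    return x & (2**n - 1) if x >= 0 else (2**n - 1) + x   # 2^n - 1 - |x|

def ones_complement_add(a, b, n=8):
    """Add two one's complement bit patterns; fold any carry-out back in."""
    mask = 2**n - 1
    s = a + b
    if s > mask:                 # carry out generated
        s = (s & mask) + 1       # end-around carry correction
    return s & mask
```

For example, −9 is 11110110, and adding 10 produces a carry-out that is added back to yield +1, as in the worked example.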
The range is asymmetric as the number −2^{n−1} (100...000) does not have a positive counterpart. If the two's complement operation is applied to −2^{n−1} (100...000), the result is the same bit pattern. Thus, in a design with fixed word length, this value is often ignored and a
symmetric range is used. The usable range of two’s complement representation for
n = 8 is same as that of one’s complement representation. The subtraction operation
in two’s complement representation can be expressed as
X + 2n − Y = 2n + (X − Y ) (1.8)
Example
Addition of two numbers X 1 = −9 and X 2 = 8 represented in two’s complement
can be done in the following way:
1 1 1 1 0 1 1 1 X1
+ 0 0 0 0 1 0 0 0 X2
1 1 1 1 1 1 1 1 Y
1 1 1 1 0 1 1 1 X1
+ 0 0 0 0 1 0 1 0 X2
cout = 1 0 0 0 0 0 0 0 1 Y
In this example, the result is +1 and a carry out (cout) is generated, which is discarded. The addition and subtraction process should be designed such that overflow never occurs; otherwise a wrong result is produced. Consider the following example:
1 1 0 0 1 −7
+ 1 0 1 1 0 −10
0 1 1 1 1 15
The actual result is −17 but 15 is produced, because −17 lies outside the 5-bit two's complement range. Thus care should be taken that the final result does not exceed the range that can be represented in two's complement.
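The overflow rule — both operands share a sign but the truncated result has the opposite sign — can be checked with a small sketch (names are illustrative):

```python
def twos_add(a, b, n):
    """Add two integers in n-bit two's complement and flag overflow.

    Overflow occurs exactly when both operands have the same sign
    but the n-bit truncated result has the opposite sign.
    """
    mask = 2**n - 1
    res = ((a & mask) + (b & mask)) & mask   # keep n bits, discard carry out
    if res >= 2**(n - 1):                    # reinterpret pattern as signed
        res -= 2**n
    overflow = (a < 0) == (b < 0) and (res < 0) != (a < 0)
    return res, overflow
```

Running it on the example above, −7 + (−10) in 5 bits gives 15 with the overflow flag set, while −9 + 8 in 8 bits gives −1 with no overflow.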
In fixed point data format based architectures, the data width used to represent numbers is fixed. Thus, the numbers of bits reserved to represent the fractional part and the integer part are also fixed. The fixed point data format is shown in Fig. 1.2. The decimal equivalent of a binary number in this format is computed as
X = x_{m−1}·2^{m−1} + ... + x_1·2^1 + x_0·2^0 + x_{−1}·2^{−1} + x_{−2}·2^{−2} + ... + x_{−(n−m)}·2^{−(n−m)}
  = Σ_{i=0}^{m−1} x_i·2^i + Σ_{i=1}^{n−m} x_{−i}·2^{−i} (1.9)
Here, m-bits are reserved to represent the integer part and (n − m)-bits are reserved
for fractional part. For example, if the data length is 16 bit and value of m is 6 then 6
bits are reserved for the integer part and rest of the bits are reserved for the fractional
part.
The majority of practical digital systems implemented on any digital platform use this data format to represent fractional numbers. To represent signed or unsigned numbers, any of the above techniques (signed magnitude, one's complement, or two's complement representation) can be used, but mostly the two's complement and signed magnitude representations are used. In the integer field, (m − 1) bits are usable as the sign of the number is represented by the MSB. An example of representing a fractional number in fixed point data format using two's complement representation is shown below:
[Fig. 1.2: Fixed point data format — m bits for the integer part, (n − m) bits for the fractional part.]
Any number beyond this maximum number cannot be represented with n = 16 and m = 6. If the value of m decreases, then the resolution of the number increases but the range of representable numbers reduces; that is, the gap between two consecutive representable numbers decreases. If the value of m increases, then the range increases but the resolution decreases. Designers therefore have to select the values of n and m carefully. All
the architectures discussed in the book are based on the fixed point data format as it
provides an easy means to represent the fractional numbers. But this representation
has some limitations due to its lower range.
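The range/resolution trade-off between m and (n − m) can be illustrated with a short quantization sketch (the function name and saturation behaviour are assumptions for illustration):

```python
def quantize(x, n, m):
    """Quantize real x to the nearest value representable in an n-bit
    two's complement fixed point format with m integer bits (sign
    included) and (n - m) fraction bits, saturating at the range limits."""
    frac = n - m
    lo = -2**(m - 1)                      # most negative value
    hi = 2**(m - 1) - 2**(-frac)          # largest value, one LSB below 2^(m-1)
    x = max(lo, min(hi, x))               # saturate to the representable range
    return round(x * 2**frac) / 2**frac   # snap to the nearest LSB step
```

With n = 16 and m = 6 (10 fraction bits) the value 0.3333 quantizes to 341/1024 ≈ 0.33301; with m = 10 (only 6 fraction bits) the same value quantizes to the coarser 21/64 ≈ 0.32813, showing the loss of resolution as m grows.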
The floating point data format is another technique to represent fractional numbers; it increases the range of the numbers that can be represented. Many dedicated processors and micro-controllers use this format to represent fractional numbers. The floating point data format covers a wide range of numbers and achieves better accuracy compared to the fixed point representation. The concept of the floating point data format comes from the representation of real fractional numbers. For example, the fractional number −9.875 can also be represented as
−9.875 = −1 × 9875 × 10^{−3} (1.11)
Other representations are also possible. Thus floating point representation is not
unique. The general format of the floating point representation is
X = S.M.r^{E_b} (1.12)
So, a floating point number has three fields, viz., Sign (S), Mantissa (M) and Exponent
(E). Here, r represents the radix and its value is 10 for decimal numbers. Similarly,
binary numbers can also be represented in this format with r = 2. The sign of a number is identified by a single digit: if the sign bit is 1 the number is negative, otherwise it is positive. The mantissa part holds all the digits of the number, and the number of bits reserved for the mantissa defines the accuracy.
The floating point data format according to the IEEE 754 standard is shown in Fig. 1.3 for a 16-bit word length. Here, 11 bits are reserved for the mantissa and 4 bits for the exponent field. The mantissa is stored as an unsigned value. In the exponent field, a unique representation must be adopted to differentiate positive and negative exponents: if two's complement were used, a negative exponent would appear larger than a positive one when the bit patterns are compared as unsigned numbers. Thus a bias is added to the exponent to generate the biased exponent (E_b). For example, if 4 bits are allocated for the exponent field then the bias value is 7 (2^3 − 1); in general, for p bits the bias is 2^{p−1} − 1. The value of E_b is obtained as E_b = bias + E. In this way, E = 0 is represented as E_b = bias = 7, E = −1 as E_b = 6, and E = 1 as E_b = 8, so negative and positive exponents are distinguished.
Example
Represent the fractional number X = −9.875 in floating point data format for 16-bit
word length.
1. Represent the magnitude of the fractional number in binary: abs(X) = 9.875 = 1001.111.
2. Decide the sign bit. The sign bit is 1 as X is negative.
3. In the mantissa part, 11 bits are reserved. The first step in forming the mantissa is to normalize the binary representation, which restricts the mantissa to the range [1, 2). After normalization and appending zeros, the result is 1_00111100000; here, the binary point is moved 3 positions to the left. The leading 1, called the hidden bit, is not included in the final mantissa and is ignored in the mantissa representation according to the IEEE 754 standard.
4. As the binary point is moved 3 positions to the left, the exponent is E = +3 and the biased exponent is E_b = 7 + 3 = 10 = 1010_2.
5. The floating point representation is 1_1010_00111100000.
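The five steps above can be sketched as a small encoder for the 16-bit format used in the text (1 sign bit, 4-bit biased exponent with bias 7, 11 mantissa bits with a hidden leading 1). The sketch handles normal non-zero values only; zero, subnormals and infinities are out of scope:

```python
def encode_fp16_custom(x):
    """Encode a non-zero real x in the 1/4/11 format of the text (sketch)."""
    sign = '1' if x < 0 else '0'
    mag, e = abs(x), 0
    while mag >= 2.0:            # normalize the magnitude to [1, 2)
        mag /= 2.0
        e += 1
    while mag < 1.0:
        mag *= 2.0
        e -= 1
    eb = e + 7                   # biased exponent, bias = 2**(4-1) - 1 = 7
    frac = round((mag - 1.0) * 2**11)   # 11 mantissa bits, hidden 1 dropped
    return sign + format(eb, '04b') + format(frac, '011b')
```

For X = −9.875 this reproduces the result of the worked example: sign 1, exponent 1010, mantissa 00111100000.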
A detailed discussion on the floating point numbers and floating point architectures
is given in Chap. 12.
Chapter 3
Basic Combinational Circuits
3.1 Introduction
3.2 Addition
Addition is the most important basic arithmetic operation and is widely used in implementing digital systems. A two-input adder circuit receives input operands a and b and generates two outputs, s and cout. Here, s is the summation output, and cout is the carry out, indicating that an overflow has occurred. The truth table for two-input addition is shown in Table 3.1. The two-input addition circuit is commonly known as a Half Adder (HA). The logical expressions for the HA derived from the truth table are

s = a ⊕ b (3.1)
cout = a.b (3.2)
Fig. 3.1 Gate-level logic diagram for the HA (inputs a, b; outputs s, cout)
The circuit diagram for the HA is shown in Fig. 3.1. A Full Adder (FA) is an arithmetic circuit that receives three inputs and generates two outputs. In the addition of two input operands, the HA circuit does not consider a carry input. In the FA circuit, a third input cin is considered, and thus the FA is called a complete adder. The
truth table of the FA is shown in Table 3.2. The logical expressions for the FA derived
from the truth table are shown below
s = a ⊕ b ⊕ cin (3.3)
cout = a.b + a.cin + b.cin (3.4)
The gate level logic diagram for the FA is shown in the Fig. 3.2. Here the FA is
implemented using two HA circuits.
3.3 Subtraction
d =a⊕b (3.5)
bout = ā.b (3.6)
The circuit diagram for the HS is shown in Fig. 3.3. A Full Subtractor (FS) is an arithmetic circuit that receives three inputs and generates two outputs. In the subtraction of two input operands, the HS circuit does not consider the borrow in (bin) input. In the FS circuit, a third input bin is considered, and thus the FS is called a complete subtractor. The truth table of the FS is shown in Table 3.4. The logical expressions for the FS derived from the truth table are shown below

d = a ⊕ b ⊕ bin (3.7)
bout = ā.b + ā.bin + b.bin (3.8)
The gate level logic diagram for the FS is shown in Fig. 3.4. Here FS is implemented
using two HS circuits.
Fig. 3.3 Gate-level logic diagram for the HS (inputs a, b; outputs d, bout)
Previously we have discussed the addition operation between two 1-bit operands. The
addition operation between two n-bit operands can be performed using n FA blocks.
Here, a 4-bit adder which adds two 4-bit operands is discussed and it generates a
4-bit sum output and a carry out output. Each bit from the two operands is added
in parallel by 4 FA blocks. The architecture of this parallel binary adder is shown in
Fig. 3.5.
The first FA block receives the initial carry input which can be kept as 0. The first
FA block generates a carry out signal which is passed to the second FA block. The
second FA block then computes its sum and carry out signal. This way carry out
signal propagates to the last FA block; hence this structure is known as a Ripple Carry Adder (RCA). For n bits, this block has a delay of n·t_FA, where t_FA is the delay of one FA block. Fast adders are discussed further in Chap. 7.
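The FA equations and the ripple of the carry through the chain can be modelled in a few lines (a behavioural sketch, not the gate-level netlist; bit lists are LSB first):

```python
def full_adder(a, b, cin):
    """Gate-level full adder equations: s = a XOR b XOR cin,
    cout = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(a_bits, b_bits, cin=0):
    """n-bit RCA: the carry out of each FA feeds the next FA's carry in."""
    s_bits, c = [], cin
    for a, b in zip(a_bits, b_bits):
        s, c = full_adder(a, b, c)
        s_bits.append(s)
    return s_bits, c        # sum bits (LSB first) and the final carry out
```

For example, 9 + 7 in 4 bits yields sum 0000 with carry out 1, mirroring the n·t_FA carry chain of Fig. 3.5.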
[Fig. 3.5: 4-bit ripple carry adder — operand bits a3..a0 and b3..b0, carry input cin, sum outputs s3..s0, internal carries c1..c3, carry out cout.]
[Figure: controlled adder/subtractor — control signal ctrl, operand inputs a3..a0 and b3..b0, outputs c and s3..s0.]
3.6 Multiplexers
Multiplexer is a circuit which selects one signal between two or many signals based
on a control signal. A 2:1 Multiplexer circuit selects from two inputs. Multiplexer
circuits have many use in digital circuits for data sharing, address bus sharing, control
signal selection, etc. The Boolean expression for a simple 2:1 Multiplexer circuit is
shown below
The Multiplexer circuit chooses input x0 when the control signal s is low and selects
input x1 when the control signal is high. The details of Multiplexer circuits are
discussed in Chap. 2.
3.7 De-Multiplexers
y_0 = s̄.x (3.10)
y_1 = s.x (3.11)
Here, s is the control signal. The input signal x is passed to the output line y0 when
s is logic zero and x is connected to the output line y1 when s is logic one. The
architecture of 1:4 De-Multiplexer using 1:2 De-Multiplexer is shown in Fig. 3.7.
[Fig. 3.7: 1:4 De-Multiplexer built from 1:2 De-Multiplexers; select bits s1 and s0 route input x to one of the outputs y0–y3.]
3.8 Decoders
A decoder circuit is used to change or decode a code into a set of signals. The decoder
receives n signals and produces 2n output signals. Only one output signal is at logic
level 1 at a time. There may be 2-4 decoder, 3-8 decoder or 4-16 decoder circuit. The
truth table for a 2-4 decoder is shown in Table 3.5.
The circuit diagram of 2-4 decoder is shown in Fig. 3.8. The circuit diagram is very
similar to that of De-Multiplexer. But in case of De-Multiplexer one input is passed
to different output lines but on the other hand decoder decodes a n input code. Higher
order decoder circuits can be easily implemented using the smaller decoder circuits.
3.9 Encoders
Encoders perform the opposite function of decoders. An encoder receives 2^n input signals and converts them into a code on n output lines. Encoders are very useful for sending coded messages in the field of communication. Encoders are available as 4-2, 8-3 or 16-4 encoders. The truth table for a 4-2 encoder is shown in Table 3.6.
The Boolean expressions for each output signal are
y0 = b + d (3.16)
y1 = c + d (3.17)
The circuit diagram for the 4-2 encoder is shown in Fig. 3.9. Encoders are used to reduce the number of bits needed to represent the input information, and thus also the number of bits needed to store the coded information.
Corresponding logic diagram of this majority voter circuit is shown in Fig. 3.10.
In Gray code, only one bit changes while going from one code word to the next. No weight is assigned to a bit position, and thus Gray code is an un-weighted code. Gray code finds application where a low switching rate is required. A comparison of Gray code with decimal numbers and binary code is given in Table 3.7.
It may be noted that the Gray code can be regarded as a reflected code; this becomes clear by examining the first 3-bit positions below and above the horizontal line in the table. Conversion between binary and Gray code is very important for interfacing the two kinds of systems. The Boolean expression to convert a binary code to a Gray code is
g_i = a_n,            if i = n
g_i = a_{i+1} ⊕ a_i,  if i = 0, 1, 2, ..., (n − 1)   (3.19)
The architecture for converting a 4-bit binary code to a 4-bit Gray code is shown
Fig. 3.11. Similarly the Boolean expressions for converting the Gray code to equiv-
alent binary code can be generated using K-map. The expressions are
a_i = g_n,            if i = n
a_i = a_{i+1} ⊕ g_i,  if i = 0, 1, 2, ..., (n − 1)   (3.20)
The corresponding scheme for converting a 4-bit Gray code to binary code is given
in Fig. 3.12.
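The two conversions can be expressed compactly on whole words (a behavioural sketch of Eqs. (3.19) and (3.20), not the gate-level XOR chains of Figs. 3.11 and 3.12):

```python
def binary_to_gray(b):
    """Eq. (3.19) on a whole word: each Gray bit is the XOR of adjacent
    binary bits; the MSB passes through unchanged."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Eq. (3.20) unrolled: each binary bit is the XOR of all Gray bits
    at and above its position."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

For instance, binary 0110 maps to Gray 0101, and converting back recovers 0110; successive Gray codes differ in exactly one bit.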
The binary number system versus the Binary Coded Decimal (BCD) code is shown
in Table 3.8. In BCD code, 10 digits are used to represent decimal numbers. These
digits vary from 0 to 9. Equivalently binary representation of these numbers can be
found. Thus up to digit 9 in decimal, both BCD and binary number system are the
same. Often BCD codes are used to display the results of a digital system on LCD
display. In such case, conversion between binary and BCD codes is very important.
These conversion techniques are described below
Observing Table 3.8, it can be said that beyond digit 9, 6 is added to the decimal number to get the corresponding BCD code; for example, 10 in the decimal system is represented as 16 in the BCD code. Here, we discuss the well-known double-dabble [1] algorithm for binary to BCD conversion, also known as shift-and-add-3. The binary number is left shifted, and whenever a 4-bit group under the weightage of ones, tens or hundreds is equal to or greater than 101_2 (5), 3 is added to that group before the next shift. Then a left shift is applied again and the steps repeat. In this way the number is converted to BCD code. An example of this conversion technique is shown in Table 3.9.
The architecture for binary to BCD conversion is shown in Fig. 3.13. The Add-3 block adds 3 to a 4-bit number whenever that number is equal to or greater than 5. The structure of the Add-3 block is shown in Fig. 3.14. Seven Add-3 blocks are used in Fig. 3.13, together with selection logic that selects either the original number or the number after addition of 3.
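The shift-and-add-3 procedure can be written directly as a loop (a behavioural sketch of the double-dabble algorithm; parameter names are illustrative):

```python
def double_dabble(value, n_bits=8, digits=3):
    """Binary to packed BCD by shift-and-add-3 (sketch).

    Before each left shift, every BCD digit that is 5 or greater gets 3
    added, so that after the shift it carries correctly into the next decade.
    """
    bcd = 0
    for i in range(n_bits - 1, -1, -1):
        for d in range(digits):                  # ones, tens, hundreds, ...
            if (bcd >> (4 * d)) & 0xF >= 5:
                bcd += 3 << (4 * d)              # the Add-3 block
        bcd = (bcd << 1) | ((value >> i) & 1)    # shift in the next binary bit
    return bcd
```

For example, binary 63 converts to the packed BCD pattern 0110_0011, and 255 to 0010_0101_0101, matching the table-based examples.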
The conversion of BCD codes to binary numbers is also important in digital circuits. A BCD number x = 123 can be expressed as x = 123 = 1 × 100 + 2 × 10 + 3 × 1, and the conversion from a BCD number to a binary number follows this philosophy. For example, in the BCD number x = 0001_1001, the first four bits from the LSB have the weightage of one and the next four bits have the weightage of ten (1010_2). Thus this number can be converted to binary as y = 1001 + (0001 × 1010) = 0001_0011. Another example of BCD to binary conversion is shown in Fig. 3.15, where the BCD number 63 (0110_0011) is converted to binary (0011_1111).
The BCD to binary conversion circuit is shown in Fig. 3.16. The LSB is the same for the binary and BCD codes. Here, two 4-bit adders are used: in the first adder, two bits under the weightage of ten are resolved, and in the second adder the next two bits are considered. Through the b input of each adder, multiples of the weightages are added. The conditions for the b inputs are based on Table 3.10; for the first adder, b3 = b1 = x5 and b2 = b0 = x4, and the conditions for the second 4-bit adder are derived similarly.
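The weighting philosophy behind the adder tree — each BCD nibble scaled by its decimal power — can be sketched as (a behavioural model, not the two-adder circuit of Fig. 3.16):

```python
def bcd_to_binary(bcd):
    """Convert a packed BCD value to binary by weighting each 4-bit
    nibble with its decimal power (ones, tens, hundreds, ...)."""
    result, weight = 0, 1
    while bcd:
        result += (bcd & 0xF) * weight   # current nibble times its weight
        weight *= 10
        bcd >>= 4
    return result
```

Applied to the examples above, the BCD patterns 0001_1001 and 0110_0011 yield 19 and 63 respectively.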
Parity check and generation is an error detection technique used in the digital transmission of bits. A parity bit is added to the data to make the total number of ones either even or odd. In the even parity scheme, the parity bit is '0' when the data contains an even number of ones and '1' when it contains an odd number of ones. In the odd parity scheme, the parity bit is '1' for an even number of ones and '0' for an odd number of ones. The parity bit is generally added at the MSB position.
Realization of both types of parity for 4-bit data is shown in Figs. 3.17 and 3.18. If evn_parity is '1' then there is an odd number of ones in the data. Three XOR gates are used to calculate evn_parity. The odd_parity output is just the inverse of evn_parity, but a separate architecture is shown in Fig. 3.18. This is a balanced architecture and has some advantages over the structure shown in Fig. 3.17; this balanced structure is discussed in detail in Chap. 15.
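The XOR-reduction performed by the three gates can be modelled directly (function names are illustrative):

```python
def evn_parity(data, n=4):
    """XOR-reduce the n data bits; the result is 1 exactly when the
    count of ones is odd, which is the even parity bit to append."""
    p = 0
    for i in range(n):
        p ^= (data >> i) & 1
    return p

def odd_parity(data, n=4):
    """Odd parity bit: the complement of the even parity bit."""
    return 1 - evn_parity(data, n)
```

For example, 1011 (three ones) gives evn_parity = 1, while 1001 (two ones) gives evn_parity = 0 and odd_parity = 1.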
3.14 Comparators
eq = ā.b̄ + a.b (the XNOR of a and b) (3.21)
lt = ā.b (3.22)
gt = a.b̄ (3.23)
The architecture of the 1-bit comparator is shown in Fig. 3.19. This is an optimized block diagram of the 1-bit comparator, in which the XNOR gate producing eq is replaced with a
NOR gate. Now we can proceed to the design of higher order comparators using 1-bit comparators. Let us discuss the design of a 4-bit comparator, which requires four 1-bit comparators. The operands a and b are equal only if all their bits are equal; thus, to check equality, the equal (=) outputs from all the comparators are ANDed. The less than (<) and greater than (>) outputs are obtained by comparing the operands a and b from the MSB side. If a3 > b3 then it can be said that a > b. If a3 = b3 then the next bit is checked; if a3 = b3 and a2 > b2 then also a > b. In this way the next bit is checked until the LSB is reached. The less than operation is carried out similarly.
The architecture of a 4-bit comparator is shown in Fig. 3.20. Here, four 1-bit
comparators are used. The comparator has three outputs lt2, eq2 and gt2. Three
more inputs are included to this block to support the hierarchical design. These
inputs are lt1, eq1 and gt1. The 4-bit comparator block receives these inputs from
another block.
An architecture of a 16-bit comparator using 4-bit comparators is shown in Fig. 3.21. Here, four 4-bit comparators are used. The 16-bit word is partitioned into four nibbles of four bits each. The nibble on the MSB side is compared first, and then the next nibble is compared. The initial inputs for the 4-bit comparator that receives the lower 4 bits are set as lt1 = 0, eq1 = 1 and gt1 = 0; the eq1 input feeds an AND gate and thus must be set to logic 1.
Here, a data width of 18 bits is considered, with 9 bits taken to represent the fractional part. When an operand a is multiplied by 2^{−i}, the operand is right shifted by i bits. This shifting operation is realized by the wired shifting technique, which does not consume any logic element: the shift is realized by directly connecting the wires and is therefore very fast. A schematic of hardware wired shifting for 1-bit right and 1-bit left shifts is shown in Fig. 3.22.
Fig. 3.22 Schematic for wired shifting by 1-bit right and left for 4-bit data width
This wired shifting methodology supports both signed and unsigned numbers represented in two's complement. For example, for 0.5/2 the result should be 0.25, and for −0.5/2 the result should be −0.25; this is why the MSB is kept the same for the input operand and the output (sign extension). The wired shift block for a 1-bit right shift is called RSH1 here, and the 1-bit left shift block is called LSH1.
The constant multiplication block that multiplies the input operand a by 0.3333 is shown in Fig. 3.23. Here, four right shift blocks are used: RSH2, RSH4, RSH6 and RSH8. The input operand is shifted and the shifted versions are added to obtain the final result according to Eq. (3.24). Only three adders are used, and thus this constant multiplier block is more hardware efficient than a complete multiplier.
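The shift-and-add approximation 2^{−2} + 2^{−4} + 2^{−6} + 2^{−8} = 0.33203125 ≈ 0.3333 can be checked with a one-line model (Python's `>>` on negative integers is already an arithmetic, sign-preserving shift, matching the MSB-replicating wired shift described above):

```python
def mul_approx_third(a):
    """Approximate a * 0.3333 with the four wired right shifts (RSH2,
    RSH4, RSH6, RSH8) and three adders of the constant multiplier."""
    return (a >> 2) + (a >> 4) + (a >> 6) + (a >> 8)
```

For a = 256 the result is 85 (and 256/3 ≈ 85.3), while a = −256 gives −85, confirming that the signed case is handled by the arithmetic shifts.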
The shift blocks shown in Fig. 3.23 shift by a fixed number of bits, but many applications require variable shifting operations. Variable shift blocks shift an input operand by a variable count. Before discussing them, first consider a block that shifts an input operand by 1 bit to the right depending on a control signal: if the control bit is high, the input operand is right shifted by 1 bit; otherwise, the block passes the input to the output unchanged. This type of block is called a Controlled RSH (CRSH) block for the right shift, and a Controlled
LSH (CLSH) block for the left shift. The CRSH1 block, shown in Fig. 3.24, shifts the input operand by 1 bit when s is equal to 1. The CRSH block adds the delay of one MUX to the path.
The variable shift blocks can now be designed using the controlled shift blocks. The variable shift block for the right shift is called the Variable RSH (VRSH), and that for the left shift is called the Variable LSH (VLSH). A diagram of the VRSH block is shown in Fig. 3.25. This block is configured using CRSH1, CRSH2, CRSH4 and CRSH8 blocks and is capable of shifting an operand by any amount from 0 to 15. A shift of s = 0 means all the controlled shift blocks are disabled and pass the input data unchanged; a shift of s = 15 is achieved by enabling all the blocks. The VRSH or VLSH block has a worst-case delay of 4 MUXes connected in series, and thus has a speed limitation.
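The cascade of controlled shift stages can be modelled as follows: each stage is a MUX that either passes the data or shifts it by its fixed amount, steered by one bit of the 4-bit shift count (a behavioural sketch of the VRSH structure):

```python
def vrsh(a, s, width=16):
    """Variable right shift by s (0..15) built from CRSH1/2/4/8 stages."""
    for k, amount in enumerate([1, 2, 4, 8]):   # CRSH1, CRSH2, CRSH4, CRSH8
        if (s >> k) & 1:                        # control bit enables this stage
            a >>= amount
    return a & (2**width - 1)
```

Because the stage amounts 1, 2, 4 and 8 are the binary weights of s, enabling the stages selected by the bits of s produces a total shift of exactly s positions.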
Fig. 3.26 (a) The seven segments (a–g) of a digit; (b) the number 2 represented on the display
begin
  if (A1 > B1)
    begin
      LT1 <= 0; GT1 <= 1; EQ1 <= 0;
    end
  else if (A1 < B1)
    begin
      LT1 <= 1; GT1 <= 0; EQ1 <= 0;
    end
  else
    begin
      LT1 <= 0; GT1 <= 0; EQ1 <= 1;
    end
end
endmodule
Q2. Write Verilog code to display BCD numbers on a Seven Segment display.
A2. In order to display BCD numbers, a Seven Segment Display (SSD) is used. The SSD is an inbuilt feature of many FPGA kits. The seven segments together can display 2^7 = 128 combinations, but only a few are used. Representation of a BCD number using seven segments is shown in Fig. 3.26.
module segment7 (BCD, SEG);
input [3:0] BCD;
output reg [6:0] SEG;
always @(BCD)
begin
  case (BCD)
    0 : SEG = 7'b1111110;
    1 : SEG = 7'b0110000;
    2 : SEG = 7'b1101101;
    3 : SEG = 7'b1111001;
    4 : SEG = 7'b0110011;
    5 : SEG = 7'b1011011;
    6 : SEG = 7'b1011111;
    7 : SEG = 7'b1110000;
    8 : SEG = 7'b1111111;
    9 : SEG = 7'b1111011;
    default : SEG = 7'b0000000;
  endcase
end
endmodule
Required time is the time within which data is required to arrive at the internal node of a flip-flop; it is constrained by the designer. The time at which data actually arrives at the internal node is the arrival time, which incorporates all the net and logic delays between the reference input point and the destination node.
The different kinds of paths when checking the timing of a design are as follows.
1. Flip-flop to flip-flop timing path.
2. External input device to on-chip flip-flop timing path.
3. On chip flip-flop to external output devices.
4. External input device to external output device through on-chip combinational
block.
There are two types of paths which can cause timing violations: the max path and the min path. Correspondingly, two types of timing checks need to be performed: a hold check for the min path and a setup check for the max path. These two checks are explained below.
In a setup timing check, the timing relationship between the clock and the data pin of a flip-flop is checked to verify whether the setup timing requirement is met. The setup check ensures that the data is stable for a certain amount of time before the active edge, namely the setup time of the flip-flop. This is done to ensure that the capture flip-flop correctly captures the data. Figure 13.11 illustrates the setup condition: the data arrival time at the capture flip-flop must not exceed the required time.
Fig. 13.11 The concept of setup timing check for a sequential circuit (launch edge at FF0/CLK after Tlaunch; capture edge at FF1/CLK after Tcapture; setup window Tsu before the capture edge)
A hold timing check is performed to ensure that the hold specification of the flip-flop is met. According to the hold specification, the data should be stable for a specified amount of time after the active edge of the clock. Figure 13.12 shows the concept of the hold timing check.
The different types of paths have already been discussed above. Setup and hold checks must be performed for each type of path; in this section, the timing checks are performed for the different types of paths.
Fig. 13.12 The concept of hold timing check for a sequential circuit (launch edge at FF0/CLK after Tlaunch; capture edge at FF1/CLK after Tcapture; hold window Th after the capture edge)
In a flip-flop to flip-flop timing path, the two flip-flops can be connected to the same or different clock signals. Figure 13.6 demonstrates this timing path. Here, the data is launched by the launch flip-flop and reaches the capture flip-flop through a combinational circuit.
In this case the required time and the arrival time are defined as
Let us consider an example to understand the setup slack calculation for a flip-flop to flip-flop timing path.
Example 13.1 The launching active clock edge occurs at 4.30 ns with Tlaunch = 0.20 ns. The clock signal has a period of 10 ns and Tcapture = 0.50 ns. The total data path delay is 6.50 ns and the setup constraint can be taken as Tsu = 0.46 ns.
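With the numbers of Example 13.1, the setup slack can be worked out as required time minus arrival time (the standard STA definition; treating Tcapture as the clock network delay at the capture flop is an assumption consistent with Fig. 13.11):

```python
# All values in nanoseconds, taken from Example 13.1.
launch_edge = 4.30   # active edge at the launch flip-flop
T_launch    = 0.20   # clock network delay to the launch flop
T_cycle     = 10.0   # clock period
T_capture   = 0.50   # clock network delay to the capture flop
T_data      = 6.50   # total data path delay
T_su        = 0.46   # setup time of the capture flop

arrival  = launch_edge + T_launch + T_data             # when data reaches FF1
required = launch_edge + T_cycle + T_capture - T_su    # latest allowed arrival
slack    = required - arrival                          # positive => setup is met
```

This gives an arrival time of 11.00 ns against a required time of 14.34 ns, hence a positive setup slack of 3.34 ns.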
In this section, we discuss the implementation of the same 1010 sequence detector using a Moore machine. Moore-type FSMs can also be designed in two styles, non-overlapping and overlapping. The variation of the output signal y in both cases is shown in Fig. 6.9; it can be observed that the output is delayed by one clock cycle compared to the output of the Mealy machine.
The state diagram for the 1010 sequence detector using a Moore machine in non-overlapping style is shown in Fig. 6.10. Here, S0/0 inside a circle represents that in the present state S0 the output is zero. The input signal (x) is written along the branches from one state to another. The Moore machine needs an extra state compared to the Mealy machine: one extra state (S4) is used, and in this state the output is always 1. The objective is thus to reach this output state from any state. Considering the same input sequence '011010', the sequence of next states is S0 S1 S1 S2 S3 S4.
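The state-by-state trace for '011010' can be verified by simulating the transitions of Fig. 6.10 directly (a behavioural sketch; the transition table is read off the state diagram):

```python
# Non-overlapping Moore '1010' detector; S4 is the output state (y = 1).
NEXT = {
    'S0': {'0': 'S0', '1': 'S1'},
    'S1': {'0': 'S2', '1': 'S1'},
    'S2': {'0': 'S0', '1': 'S3'},
    'S3': {'0': 'S4', '1': 'S1'},
    'S4': {'0': 'S0', '1': 'S1'},   # non-overlapping: restart after detection
}

def run(sequence, start='S0'):
    """Return the list of next states visited for an input bit string."""
    states, s = [], start
    for bit in sequence:
        s = NEXT[s][bit]
        states.append(s)
    return states
```

Feeding in '011010' reproduces the sequence S0 S1 S1 S2 S3 S4 stated above, ending in the output state S4.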
The 1010 sequence detector using the Moore machine is also designed here using K-maps. The state table, formed according to the state diagram in Fig. 6.10, is shown in Table 6.4. The next step is to assign the states: five states are used, and thus a minimum of three bits is needed to represent them. The state assignment table is shown in Table 6.5.
The excitation table for the 1010 sequence detector using the Moore machine in non-overlapping style is shown in Table 6.6. Here, the same D flip-flop is used. In comparison to the Mealy machine, three flip-flops are required instead of two. Using K-map optimization, the Boolean expressions for the inputs of the flip-flops and for the output are derived.
Fig. 6.9 '1010' sequence detector output for the overlapping and non-overlapping cases for the Moore machine
Fig. 6.10 State diagram for the 1010 sequence detector using a Moore machine in non-overlapping style (states S0/0, S1/0, S2/0, S3/0, S4/1)
Table 6.5 State table with state assignments for the '1010' sequence detector using a Moore machine in non-overlapping style

Present state | Next state (X=0) | Next state (X=1) | Output (X=0) | Output (X=1)
000           | 000              | 001              | 0            | 0
001           | 010              | 001              | 0            | 0
010           | 000              | 011              | 0            | 0
011           | 100              | 001              | 0            | 0
100           | 000              | 001              | 1            | 1
Table 6.6 Excitation table for the 1010 sequence detector using a Moore machine

Present state (q2 q1 q0) | Input X | Next state (q2* q1* q0*) | F/F inputs (d2 d1 d0) | Output y
0 0 0                    | 0       | 0 0 0                    | 0 0 0                 | 0
0 0 0                    | 1       | 0 0 1                    | 0 0 1                 | 0
0 0 1                    | 0       | 0 1 0                    | 0 1 0                 | 0
0 0 1                    | 1       | 0 0 1                    | 0 0 1                 | 0
0 1 0                    | 0       | 0 0 0                    | 0 0 0                 | 0
0 1 0                    | 1       | 0 1 1                    | 0 1 1                 | 0
0 1 1                    | 0       | 1 0 0                    | 1 0 0                 | 0
0 1 1                    | 1       | 0 0 1                    | 0 0 1                 | 0
1 0 0                    | 0       | 0 0 0                    | 0 0 0                 | 1
1 0 0                    | 1       | 0 0 1                    | 0 0 1                 | 1
Fig. 6.11 K-maps for the 1010 sequence detector using a Moore machine
The K-map optimization is shown in Fig. 6.11. The don't care conditions could be used to generate more optimized expressions, but they are not considered here. The Boolean expressions without optimization are shown below.
The hardware realization of the 1010 sequence detector using the Moore machine in non-overlapping style is shown in Fig. 6.12. Here, three D flip-flops are used. The output of this sequence detector is a function of only the present state; thus this sequence detector is strictly a Moore machine.
The above discussion covered the sequence detector using a Moore machine in non-overlapping style. This detector can similarly be converted to detect overlapping sequences; the nature of the output was shown earlier in Fig. 6.9. The state diagram for the overlapping style is shown in Fig. 6.13. In the present state S4, if the input is 1, the state transition occurs from S4 to S3, since the last three received bits '101' already form a partial match awaiting a 0.
Fig. 6.12 Hardware implementation of the 1010 sequence detector using a Moore machine in non-overlapping style (three D flip-flops dff0–dff2, common clk, output y)
Fig. 6.13 State diagram of the 1010 sequence detector using a Moore machine in overlapping style
Table 6.9 State table for the serial adder using a Moore machine

Present state | ab=00 | ab=01 | ab=10 | ab=11 | Output (sum)
S0            | S0    | S3    | S3    | S1    | 0
S1            | S3    | S1    | S1    | S2    | 0
S2            | S3    | S1    | S1    | S2    | 1
S3            | S0    | S3    | S3    | S1    | 1
A vending machine is a practical example where an FSM can be used; ticket dispensers at stations and can-drink dispensers at shops are examples of vending machines. In this chapter we consider a simple vending machine which dispenses a can of coke after 15 rupees have been deposited. The machine has only one coin slot, which means customers can deposit one coin at a time. The machine accepts only 10-rupee (T) or 5-rupee (F) coins, and it does not give any change. So the input signal x can take the following values:
1. x = 00, no coin deposited.
2. x = 01, 5-rupee coin (F) deposited.
3. x = 10, 10-rupee coin (T) deposited.
4. x = 11, forbidden; both coins cannot be deposited at the same time.
[Figure: state diagram of the vending machine — states S0–S3, branches labeled with input/output pairs such as 00/0, 01/0, 10/0, 01/1 and 10/1.]