
Chapter 1

Binary Number System

1.1 Introduction

Representation of numbers is very important for the efficient performance of digital systems. The binary number system (BNS) is a common way to represent any number in digital systems. In this conventional system, the number representation should be valid for both positive and negative numbers. Also, the representation technique should produce maximum accuracy in representing a real number. In addition to this conventional BNS, some computers have adopted unconventional number systems to achieve faster addition, multiplication or division. These unconventional number systems may have a higher base compared to base 2 of the binary system. This chapter discusses the basics of BNS and its representation techniques.

1.2 Binary Number System

In digital systems implemented on computers, micro-controllers or FPGAs, integers are represented using BNS. In BNS, any number is represented by two symbols, '0' and '1'. A number X of length n is represented in BNS as

X = {x_{n−1}, x_{n−2}, ..., x_1, x_0} (1.1)

Here, each digit or bit (x_i) takes its value from the set {0, 1}, and an integer is represented using n bits. The value of n is important for the correct representation of an integer and decides the accuracy of a digital system. The value of the integer can be evaluated from the binary representation as


X = x_{n−1}2^{n−1} + x_{n−2}2^{n−2} + ... + x_1 2^1 + x_0 2^0 = Σ_{i=0}^{n−1} x_i 2^i (1.2)
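The weighted sum in Eq. (1.2) can be sketched in Python; the function name and the MSB-first list convention here are illustrative choices, not from the text:

```python
def bns_value(bits):
    """Evaluate Eq. (1.2): bits = [x_{n-1}, ..., x_1, x_0], MSB first."""
    value = 0
    for b in bits:
        value = value * 2 + b  # Horner's form of sum(x_i * 2**i)
    return value

print(bns_value([1, 0, 1, 1]))  # 1011 in binary -> 11
```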

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
S. Roy, Advanced Digital System Design,
https://doi.org/10.1007/978-3-031-41085-7_1

1.3 Representation of Numbers

In the implementation of a digital system, an integer can be positive or negative. The representation scheme should therefore be able to encode both negative and positive numbers. There are three major ways to represent integers:
1. Signed Magnitude Representation.
2. One’s Complement Representation.
3. Two’s Complement Representation.

1.3.1 Signed Magnitude Representation

In the signed magnitude representation, the sign and magnitude of a number are represented separately. The sign is represented by a sign bit. For an n-bit binary number, 1 bit is reserved for the sign and (n − 1) bits are reserved for the magnitude. In general BNS, the MSB is used as the sign bit, and logic 1 on this bit indicates a negative number. This format is shown in Fig. 1.1. The maximum magnitude that can be represented in this number system is
X_max = 2^{n−1} − 1 (1.3)

This number can be both negative and positive depending upon the MSB. If n = 8, then X_max = 127. In this representation, zero does not have a unique representation: both 00000000 and 10000000 denote zero. Signed magnitude representation has a symmetric range of numbers, which means that every positive number has its negative counterpart. The integer X = −9 can be represented in signed magnitude as X = 10001001. Here, 7 bits represent the magnitude and the MSB is 1 as the integer is negative.
Example
Addition of two numbers X 1 = −9 and X 2 = 8 can be done in the following way:

1 0 0 0 1 0 0 1 X1
– 0 0 0 0 1 0 0 0 X2
0 0 0 0 0 0 0 1 Y

In the above example, X1 is a negative number and X2 is a positive number. Thus X2 is subtracted from X1, which results in Y. In signed magnitude representation, the sign bits of the operands decide the operation to be performed on the operands. Table 1.1 shows the addition/subtraction operation depending on the MSBs of the operands. If the signs of the two operands are the same, then the sign of the output is the same as well. Otherwise, a comparison of the two operands is required.

Fig. 1.1 Signed magnitude representation: the MSB holds the sign and the remaining (n − 1) bits hold the magnitude

Table 1.1 Addition of two signed magnitude numbers

MSB(X1)  MSB(X2)  Operation    MSB(Y) if X1 < X2  MSB(Y) if X1 > X2
0        0        Addition     0                  0
0        1        Subtraction  1                  0
1        0        Subtraction  0                  1
1        1        Addition     1                  1
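As a rough Python sketch of the sign-and-magnitude encoding described above (the helper names `sm_encode`/`sm_decode` are assumed, not the book's):

```python
def sm_encode(x, n=8):
    """Signed magnitude: MSB is the sign, lower n-1 bits are |x|."""
    assert abs(x) <= 2 ** (n - 1) - 1
    return ((1 if x < 0 else 0) << (n - 1)) | abs(x)

def sm_decode(bits, n=8):
    mag = bits & ((1 << (n - 1)) - 1)
    return -mag if bits >> (n - 1) else mag

print(format(sm_encode(-9), "08b"))  # 10001001
```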

1.3.2 One’s Complement Representation

In one’s complement representation, positive numbers are represented in the same


way as they are represented in the signed magnitude representation. The negative
numbers are represented by performing the bit-wise inversion on the number as
shown below:
1. Obtain the binary value of the magnitude of the number. For example, for X = 9 the binary magnitude is 9 = 01001.
2. If the number is negative then invert the number bit wise. For example, −9 in
one’s complement representation is X = 10110.
3. If the number is positive then the binary value is the one’s complement represen-
tation of that number.
The range of the one’s complement representation is

− 2n−1 + 1 ≥ X ≥ 2n−1 − 1 (1.4)

In this representation, zero does not have a unique representation, as in the case of signed magnitude representation. This representation also has a symmetric range. The MSB differentiates the positive and negative numbers. The range of one's complement representation for n = 8 is −127 ≤ X ≤ 127. When a positive number X and a negative number −Y represented in one's complement are added, the result is

X + (2^n − ulp) − Y = (2^n − ulp) + (X − Y) (1.5)

where ulp = 2^0 = 1. This can be illustrated with n = 8:

X + (256 − 1) − Y = 255 + (X − Y) (1.6)



Example
Addition of two numbers X 1 = −9 and X 2 = 8 represented in one’s complement
can be done in the following way:

1 1 1 1 0 1 1 0 X1
+ 0 0 0 0 1 0 0 0 X2
1 1 1 1 1 1 1 0 Y

One’s complement equivalent of the output is −1. In one’s complement number


system, a carry out is indication of a correction step. This is shown in the following
example:

1 1 1 1 0 1 1 0 X1
+ 0 0 0 0 1 0 1 0 X2
cout = 1 0 0 0 0 0 0 0 0 Y

Here, the result must be 1 but we get 0 instead. The correction is done by adding cout to the result.
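The end-around carry correction can be sketched as follows; the helper names are illustrative:

```python
def oc_encode(x, n=8):
    """One's complement encode (sketch): invert the magnitude for negatives."""
    mask = (1 << n) - 1
    return x & mask if x >= 0 else ~(-x) & mask

def oc_add(a, b, n=8):
    """Add two one's complement words with the end-around carry correction."""
    mask = (1 << n) - 1
    s = a + b
    if s > mask:             # carry out -> add it back to the result
        s = (s & mask) + 1
    return s & mask

print(format(oc_add(oc_encode(-9), oc_encode(8)), "08b"))  # 11111110, i.e. -1
```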

1.3.3 Two’s Complement Representation

Two’s complement representation is very popular in implementing practical digital


systems. Two’s complement representation of a number can be obtained by first
taking the one’s complement representation and then by adding ulp. The steps for
obtaining two’s complement representation are
1. Obtain the binary value of the magnitude of the number. For X = −9, the binary magnitude is 9 = 01001.
2. If the number is negative, then take the one's complement and add ulp to it. After complementing, X = 10110, and the final value is 10110 + 00001 = 10111 = X.
3. If the number is positive then the binary value is the two’s complement represen-
tation of that number.
In two’s complement representation, positive and the negative numbers are differ-
entiated by the status of the MSB bit. If the MSB is 1 then the number is treated as
negative. Here the zero has unique representation. The range of numbers in two’s
complement number system is

− 2n−1 ≥ X ≥ 2n−1 − 1 (1.7)

The range is asymmetric as the number −2n−1 (100...000) does not have the positive
counterpart. If a complement operation is attempted on 2n−1 (100...000) then the

result will be the same. Thus, in a design with fixed word length, this value is ignored and a symmetric range is used. The usable range of two's complement representation for n = 8 is the same as that of one's complement representation. The subtraction operation in two's complement representation can be expressed as

X + 2^n − Y = 2^n + (X − Y) (1.8)

Example
Addition of two numbers X 1 = −9 and X 2 = 8 represented in two’s complement
can be done in the following way:

1 1 1 1 0 1 1 1 X1
+ 0 0 0 0 1 0 0 0 X2
1 1 1 1 1 1 1 1 Y

Here two’s complement equivalent of the Y is −1. In two’s complement number


system all the digits participate in the addition or subtraction process. In addition
process, there may be chance of generating carry out and the overflow. If the sign
of the operands are opposite, then overflow will not occur but carry out can occur.
If the result is positive then the carry out occurs. Another example where carry is
generated is shown below:

1 1 1 1 0 1 1 1 X1
+ 0 0 0 0 1 0 1 0 X2
cout = 1 0 0 0 0 0 0 0 1 Y

In this example, the result is +1 and a carry out (cout) is generated. The addition and subtraction process should be done in such a way that overflow never occurs; otherwise, a wrong result will be produced. Consider the following example:

1 1 0 0 1 −7
+ 1 0 1 1 0 −10
0 1 1 1 1 15

The actual result is −17 but here 15 is produced. Thus, care should be taken that the final result does not exceed the maximum number that can be represented in two's complement representation.

1.4 Binary Representation of Real Numbers

The above section discussed how to represent positive or negative numbers, assuming that the numbers do not have a fractional part. But practical numbers can be fractional as well. Digital platforms use a specific data format to represent such fractional numbers. There are two types of data formats which are used in any digital design:
1. Fixed Point Data Format.
2. Floating Point Data Format.

1.4.1 Fixed Point Data Format

In architectures based on the fixed point data format, the data width used to represent the numbers is fixed. Thus the numbers of bits reserved to represent the fractional part and the integer part are also fixed. The fixed point data format is shown in Fig. 1.2.
The decimal equivalent of a binary number in this format is computed as

X = x_{m−1}2^{m−1} + ... + x_1 2^1 + x_0 2^0 + x_{−1}2^{−1} + x_{−2}2^{−2} + ... + x_{−(n−m)}2^{−(n−m)}
  = Σ_{i=0}^{m−1} x_i 2^i + Σ_{i=1}^{n−m} x_{−i} 2^{−i} (1.9)

Here, m-bits are reserved to represent the integer part and (n − m)-bits are reserved
for fractional part. For example, if the data length is 16 bit and value of m is 6 then 6
bits are reserved for the integer part and rest of the bits are reserved for the fractional
part.
The majority of practical digital systems implemented on any digital platform use this data format to represent fractional numbers. Now, to represent signed or unsigned numbers, any of the above techniques (signed magnitude, one's complement, or two's complement representation) can be used, but mostly the two's complement and signed magnitude representations are used. In the integer field, (m − 1) bits are usable, as the sign of the number is represented using the MSB. An example of representing a fractional number in fixed point data format using two's complement representation is shown below:

Fig. 1.2 Fixed point representation of real numbers: m bits for the integer part, (n − m) bits for the fractional part



1. Let the number be X = −9.875, n = 16 and m = 6.


2. Binary representation of the integer part using 6 bits is 9 = 001001.
3. Binary representation of the fractional part using 10 bits is 0.875 = 2^{−1} + 2^{−2} + 2^{−3} = 1110000000.
4. The magnitude of the number in fixed point format is 001001_1110000000.
5. Perform two's complement as the number is negative. Thus X = 1_10110_0010000000.
If 6 bits are reserved for the integer part for n = 16, then the maximum positive integer that can be represented is 2^{m−1} − 1 = 31. The maximum representable negative number is −2^{m−1} = −32. But to avoid confusion, the range of negative and positive numbers is often kept the same. The maximum representable real number with this format is

011111_1111111111 = 31.9990234375 (1.10)

Any number beyond this maximum cannot be represented with n = 16 and m = 6. If the value of m decreases, the resolution of the number increases and the range of representable numbers reduces; the gap between two consecutive numbers decreases. If the value of m increases, the range increases but the resolution decreases. Designers have to select the values of n and m carefully. All the architectures discussed in this book are based on the fixed point data format, as it provides an easy means to represent fractional numbers. But this representation has some limitations due to its lower range.
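The fixed point example above (n = 16, m = 6) can be reproduced with a short sketch; the scaling-by-2^{n−m} approach and the helper names are illustrative:

```python
def to_fixed(x, n=16, m=6):
    """Two's complement fixed point: m integer bits, n-m fractional bits."""
    return round(x * (1 << (n - m))) & ((1 << n) - 1)

def from_fixed(bits, n=16, m=6):
    if bits >> (n - 1):          # negative bit pattern
        bits -= 1 << n
    return bits / (1 << (n - m))

print(format(to_fixed(-9.875), "016b"))  # 1101100010000000
```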

1.5 Floating Point Data Format

Floating point data format is another technique to represent fractional numbers. It increases the range of the numbers that can be represented, and many dedicated processors and micro-controllers use this format. Floating point data format covers a wide range of numbers and achieves better accuracy compared to fixed point representation. The concept comes from the scientific representation of real fractional numbers. For example, the fractional number −9.875 can also be represented as
−9.875 = −1 × 9875 × 10^{−3} (1.11)

Other representations are also possible. Thus floating point representation is not
unique. The general format of the floating point representation is

X = S · M · r^E (1.12)

So, a floating point number has three fields, viz., Sign (S), Mantissa (M) and Exponent
(E). Here, r represents the radix and its value is 10 for decimal numbers. Similarly,
8 1 Binary Number System

Fig. 1.3 Floating point representation of real numbers: 1 sign bit, 4-bit biased exponent, 11-bit mantissa

the binary numbers also can be represented in this format where r = 2. Sign of a
number is identified by a single digit. If the Sign bit is 1 then the number is negative
else it is a positive number. In the mantissa part all the digits of the number are
present. Number of bits reserved for mantissa part defines the accuracy.
The floating point data format according to the IEEE 754 standard is shown in Fig. 1.3 for 16-bit word length. Here, 11 bits are reserved for the mantissa and 4 bits are reserved for the exponent field. The mantissa is represented in unsigned form. In the exponent field, a unique representation must be adopted to differentiate positive and negative exponents. If two's complement representation were used, the bit pattern of a negative exponent would appear greater than that of a positive exponent. Thus a bias is added to the exponent to generate the biased exponent (E_b). For example, if 4 bits are allocated for the exponent field, then the bias value is 7 (2^3 − 1). In general, for p bits the bias is 2^{p−1} − 1. The value of E_b is obtained as E_b = bias + E. In this way, the value E = 0 is represented as E_b = bias, the exponent value E = −1 is represented as E_b = 6, and the exponent value E = 1 is represented as E_b = 8. In this way the negative and positive exponents are distinguished.
Example
Represent the fractional number X = −9.875 in floating point data format for 16-bit
word length.
1. Represent the magnitude of the fractional number in binary: abs(X) = 9.875 = 1001.111.
2. First decide the sign bit. The sign bit is 1 as X is negative.
3. In the mantissa part, 11 bits are reserved. The first step is to normalize the binary representation, which restricts the mantissa between 1 and 2. Here, the binary point is moved 3 positions to the left: 1001.111 = 1.001111 × 2^3. After appending zeros, the mantissa is 1_00111100000. The leading MSB is called the hidden bit and is not included in the final version of the mantissa according to the IEEE 754 standard.
4. As the mantissa is normalized with a 3-bit shift, the value of the exponent is E = +3 and the value of the biased exponent is E_b = 7 + 3 = 10 = 1010₂.
5. The floating point representation is 1_1010_00111100000.
A detailed discussion on the floating point numbers and floating point architectures
is given in Chap. 12.
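The packing steps above can be sketched for this 1-4-11 format; this is a minimal illustration that assumes a normalizable nonzero input (no rounding, zero, or subnormal handling):

```python
import math

def fp16_encode(x):
    """Pack x into the 1-4-11 format of Fig. 1.3 (sketch, bias = 7)."""
    s = 1 if x < 0 else 0
    m = abs(x)
    e = math.floor(math.log2(m))      # normalize so 1 <= m / 2**e < 2
    frac = m / (2 ** e) - 1.0         # drop the hidden leading 1
    mant = round(frac * (1 << 11))    # 11 mantissa bits
    eb = e + 7                        # biased exponent, bias = 2**(4-1) - 1
    return (s << 15) | (eb << 11) | mant

print(format(fp16_encode(-9.875), "016b"))  # 1101000111100000
```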
Chapter 3
Basic Combinational Circuits

3.1 Introduction

In a combinational circuit, the output is a pure function of inputs only whereas in


sequential circuits output is not only a function of the present inputs but also a
function of previous output status. This means combinational circuits do not have
memory. Combinational circuits implement a particular Boolean expression and are
also known as time-independent circuits.
Complex digital systems cannot be a pure combinational circuit. Both sequential
and combinational circuits are required for a digital system implementation. Exam-
ples of combinational circuits are Adder, Subtractor, Multiplexer, De-Multiplexer,
Encoder, Decoder, etc. In this chapter, various combinational circuits will be dis-
cussed. This will be a very brief discussion as there are many books and online
tutorials available on this topic.

3.2 Addition

Addition is the most important basic arithmetic operation and is widely used in implementing digital systems. A two-input adder circuit receives input operands a and b and generates two outputs s and cout. Here, s is the summation output, and cout is the carry out indicating that an overflow has occurred. The truth
table for two-input addition is shown in Table 3.1. The two-input addition circuit is
commonly known as Half Adder (HA). The logical expression for the HA derived
from the truth table is

s =a⊕b (3.1)
cout = a.b (3.2)


Table 3.1 Characteristic table of a HA


a b s cout
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1

Fig. 3.1 Gate level logic diagram for the HA

Table 3.2 Characteristic table of a FA


a b cin s cout
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

The circuit diagram for the HA is shown in Fig. 3.1. A Full Adder (FA) is an arithmetic circuit that receives three inputs and generates two outputs. In the addition of two-input operands, the HA circuit does not consider the carry input. But in the FA circuit, a third input cin is considered, and thus the FA is called a complete adder. The truth table of the FA is shown in Table 3.2. The logical expressions for the FA derived from the truth table are shown below

s = a ⊕ b ⊕ cin (3.3)
cout = a.b + a.cin + b.cin (3.4)

The gate level logic diagram for the FA is shown in Fig. 3.2. Here the FA is implemented using two HA circuits.
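A quick Python sketch of the HA and of the FA built from two HAs plus an OR gate, as in Fig. 3.2, with an exhaustive check against Eqs. (3.3)-(3.4):

```python
def half_adder(a, b):
    return a ^ b, a & b            # (sum, carry), Eqs. (3.1)-(3.2)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)      # FA from two HAs plus an OR, as in Fig. 3.2
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

# exhaustive check against Eqs. (3.3)-(3.4)
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s == a ^ b ^ cin
            assert cout == (a & b) | (a & cin) | (b & cin)
```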

Fig. 3.2 Gate level realization of FA using HA

3.3 Subtraction

Like addition, subtraction is a very important operation in implementing digital systems. A two-input subtraction circuit receives input operands (a and b) and generates two outputs (d and bout). Here, d is the difference output and bout is the borrow out, indicating that a larger number is being subtracted from a smaller one. The truth table for two-input subtraction is shown in Table 3.3. The two-input subtraction circuit is commonly known as Half Subtractor (HS). The logical expressions for the HS derived from the truth table are

d =a⊕b (3.5)
bout = ā.b (3.6)

The circuit diagram for the HS is shown in Fig. 3.3. A Full Subtractor (FS) is an arithmetic circuit that receives three inputs and generates two outputs. In the subtraction of two-input operands, the HS circuit does not consider the borrow in (bin) input. But in the FS circuit, a third input bin is considered, and thus the FS is called a complete subtractor. The truth table of the FS is shown in Table 3.4. The logical expressions for the FS derived from the truth table are shown below

d = a ⊕ b ⊕ bin (3.7)
bout = ā.b + ā.bin + b.bin (3.8)

The gate level logic diagram for the FS is shown in Fig. 3.4. Here FS is implemented
using two HS circuits.
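Similarly, the HS and the FS built from two HS blocks plus an OR gate can be sketched and checked exhaustively against Table 3.4 (helper names assumed):

```python
from itertools import product

def half_subtractor(a, b):
    return a ^ b, (1 - a) & b        # (difference, borrow out), Eqs. (3.5)-(3.6)

def full_subtractor(a, b, bin_):
    d1, b1 = half_subtractor(a, b)   # FS from two HS blocks, as in Fig. 3.4
    d, b2 = half_subtractor(d1, bin_)
    return d, b1 | b2

# exhaustive check against Table 3.4
for a, b, bi in product((0, 1), repeat=3):
    d, bo = full_subtractor(a, b, bi)
    assert d == a ^ b ^ bi
    assert bo == ((1 - a) & b) | ((1 - a) & bi) | (b & bi)
```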

Table 3.3 Characteristic table of a HS


a b d bout
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0

Fig. 3.3 Gate level logic diagram for the HS

Table 3.4 Characteristic table of a FS


a b bin d bout
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

Fig. 3.4 Gate level realization of FS using HS

3.4 Parallel Binary Adder

Previously we have discussed the addition operation between two 1-bit operands. The
addition operation between two n-bit operands can be performed using n FA blocks.
Here, a 4-bit adder which adds two 4-bit operands is discussed and it generates a
4-bit sum output and a carry out output. Each bit from the two operands is added
in parallel by 4 FA blocks. The architecture of this parallel binary adder is shown in
Fig. 3.5.
The first FA block receives the initial carry input, which can be kept at 0. The first FA block generates a carry out signal which is passed to the second FA block. The second FA block then computes its sum and carry out signals. In this way the carry out signal propagates to the last FA block; thus this structure is known as a Ripple Carry Adder (RCA). This block has a delay of n·t_FA for n bits, where t_FA is the delay of one FA block. More about fast adders is discussed in Chap. 7.
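A bit-level sketch of the RCA, with operands given LSB first (the list convention is an illustrative choice):

```python
def ripple_carry_add(a_bits, b_bits, cin=0):
    """a_bits, b_bits: lists of bits, LSB first. Returns (sum_bits, cout)."""
    s_bits, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s_bits.append(a ^ b ^ carry)                  # FA sum, Eq. (3.3)
        carry = (a & b) | (a & carry) | (b & carry)   # FA carry, Eq. (3.4)
    return s_bits, carry

print(ripple_carry_add([1, 1, 0, 0], [1, 0, 1, 0]))  # 3 + 5 -> ([0, 0, 0, 1], 0)
```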

Fig. 3.5 Architecture of 4-bit ripple carry adder

3.5 Controlled Adder/Subtractor

Controlled adder/subtractor block is one of the most important combinational circuits


in designing digital systems. In many applications, it is required to perform addition
and subtraction operation by a single block. The controlled adder/subtractor block
performs addition and subtraction operation depending on a control signal.
The architecture for the 4-bit adder/subtractor is shown in Fig. 3.6. In two's complement representation, two operands a and b are added as a + b. In order to subtract b from the operand a, first the two's complement of b is taken and then added to a. This can be written as a − b = a + (one's complement of b) + ulp, where ulp = 2^0. The addition operation is performed when the ctrl input is low and the subtraction operation is performed when the ctrl input is high. The XOR gates are used to take the one's complement of b, and the ctrl input is connected to the first full adder as the input carry to add ulp.
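The XOR-plus-carry-in trick can be modelled as follows (a word-level sketch, not a gate-level one):

```python
def add_sub(a, b, ctrl, n=4):
    """ctrl = 0: a + b; ctrl = 1: a - b (two's complement, n-bit result)."""
    mask = (1 << n) - 1
    b_eff = b ^ (mask if ctrl else 0)   # XOR gates invert b when ctrl is high
    return (a + b_eff + ctrl) & mask    # ctrl doubles as the carry-in (the ulp)

print(add_sub(9, 3, 1))  # 6
```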

Fig. 3.6 Architecture of 4-bit adder/subtractor



3.6 Multiplexers

Multiplexer is a circuit which selects one signal from two or more signals based on a control signal. A 2:1 Multiplexer circuit selects from two inputs. Multiplexer circuits have many uses in digital circuits for data sharing, address bus sharing, control signal selection, etc. The Boolean expression for a simple 2:1 Multiplexer circuit is shown below

y = s̄.x0 + s.x1 (3.9)

The Multiplexer circuit chooses input x0 when the control signal s is low and selects
input x1 when the control signal is high. The details of Multiplexer circuits are
discussed in Chap. 2.
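A one-line sketch of the 2:1 Multiplexer of Eq. (3.9):

```python
def mux2(x0, x1, s):
    """2:1 Multiplexer: y = s'.x0 + s.x1."""
    return ((1 - s) & x0) | (s & x1)

print(mux2(0, 1, 1))  # 1
```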

3.7 De-Multiplexers

A De-Multiplexer sends an input signal to one of several output ports depending on a control signal; it performs the reverse operation of a Multiplexer. The basic Boolean expressions for a 1:2 De-Multiplexer are given below

y0 = s̄.x (3.10)
y1 = s.x (3.11)

Here, s is the control signal. The input signal x is passed to the output line y0 when
s is logic zero and x is connected to the output line y1 when s is logic one. The
architecture of 1:4 De-Multiplexer using 1:2 De-Multiplexer is shown in Fig. 3.7.
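A matching sketch of the 1:2 De-Multiplexer of Eqs. (3.10)-(3.11):

```python
def demux2(x, s):
    """1:2 De-Multiplexer: y0 = s'.x, y1 = s.x."""
    return (1 - s) & x, s & x

print(demux2(1, 1))  # (0, 1)
```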

Fig. 3.7 A schematic for a 1:4 DeMUX

Table 3.5 Characteristic table of a 2-4 decoder


en s1 s0 y0 y1 y2 y3
1 0 0 1 0 0 0
1 0 1 0 1 0 0
1 1 0 0 0 1 0
1 1 1 0 0 0 1

Fig. 3.8 Circuit diagram of a 2-4 decoder

3.8 Decoders

A decoder circuit is used to decode a code into a set of signals. The decoder receives n signals and produces 2^n output signals. Only one output signal is at logic level 1 at a time. There may be 2-4, 3-8, or 4-16 decoder circuits. The truth table for a 2-4 decoder is shown in Table 3.5.

The Boolean expressions for each output signal are

y0 = s̄0.s̄1.en (3.12)
y1 = s0.s̄1.en (3.13)
y2 = s̄0.s1.en (3.14)
y3 = s0.s1.en (3.15)

The circuit diagram of the 2-4 decoder is shown in Fig. 3.8. The circuit diagram is very similar to that of a De-Multiplexer; but whereas a De-Multiplexer passes one input to different output lines, a decoder decodes an n-bit input code. Higher order decoder circuits can be easily implemented using smaller decoder circuits.
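A sketch of Eqs. (3.12)-(3.15) for the 2-4 decoder (the argument order is an illustrative choice):

```python
def decoder_2to4(s1, s0, en):
    """2-4 decoder with enable, Eqs. (3.12)-(3.15)."""
    return (en & (1 - s1) & (1 - s0),
            en & (1 - s1) & s0,
            en & s1 & (1 - s0),
            en & s1 & s0)

print(decoder_2to4(1, 0, 1))  # (0, 0, 1, 0)
```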

3.9 Encoders

An encoder performs the opposite function of a decoder. It receives 2^n input signals and converts them into a code on n output lines. Encoders are very useful for sending coded messages in the field of communication. Encoders can

Table 3.6 Characteristic table of a 4-2 Binary Encoder


a b c d y1 y0
1 0 0 0 0 0
0 1 0 0 0 1
0 0 1 0 1 0
0 0 0 1 1 1

Fig. 3.9 Circuit for a 4-2 encoder

be available as 4-2 encoder, 8-3 encoder or 16-4 encoder. The truth table for a 4-2
encoder is shown in Table 3.6.
The Boolean expressions for each output signal are

y0 = b + d (3.16)
y1 = c + d (3.17)

The circuit diagram for the 4-2 encoder is shown in Fig. 3.9. Encoders are used to reduce the number of bits needed to represent the input information, and likewise to reduce the number of bits needed to store it.
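Eqs. (3.16)-(3.17) can be checked on the one-hot inputs of Table 3.6:

```python
def encoder_4to2(a, b, c, d):
    """4-2 binary encoder for one-hot inputs, Eqs. (3.16)-(3.17)."""
    return c | d, b | d          # (y1, y0)

print(encoder_4to2(0, 0, 1, 0))  # (1, 0)
```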

3.10 Majority Voter Circuit

The majority voter circuit is useful in fault-tolerant computing and in other applications. The output of a majority voter circuit becomes true when more than 50% of its inputs are true. For example, in a four-input majority voter circuit, if more than two inputs are logic 1 then the output becomes 1; this means the majority of inputs are true. Here, a 4-input majority voter circuit is designed and its Boolean expression is

y = ab(c + d) + cd(a + b) (3.18)

Corresponding logic diagram of this majority voter circuit is shown in Fig. 3.10.
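Eq. (3.18) can be verified exhaustively to be 1 exactly when more than two of the four inputs are 1:

```python
from itertools import product

def majority4(a, b, c, d):
    return (a & b & (c | d)) | (c & d & (a | b))   # Eq. (3.18)

# the expression is 1 exactly when more than two of the four inputs are 1
for bits in product((0, 1), repeat=4):
    assert majority4(*bits) == (1 if sum(bits) > 2 else 0)
```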

Fig. 3.10 A four-input majority voter circuit

3.11 Data Conversion Between Binary and Gray Code

In Gray code, only one bit changes while going from one state to the next. No weight is assigned to a bit position, and thus Gray code is an unweighted code. Gray code finds application where a low switching rate is required. A comparison of Gray code with decimal numbers and binary code is given in Table 3.7.
It may be noted that Gray code can be regarded as a reflected code; this becomes clear by comparing the lower 3 bit positions in the top and bottom halves of the table. The conversion between binary and Gray code is very important to interface the two kinds of systems. The Boolean expression to convert a binary code to Gray code is

Table 3.7 Binary representation versus Gray representation


Decimal number Binary representation Gray representation
0 0000 0000
1 0001 0001
2 0010 0011
3 0011 0010
4 0100 0110
5 0101 0111
6 0110 0101
7 0111 0100
8 1000 1100
9 1001 1101
10 1010 1111
11 1011 1110
12 1100 1010
13 1101 1011
14 1110 1001
15 1111 1000

Fig. 3.11 Binary data to Gray code conversion for 4 bits

Fig. 3.12 Gray code to binary data conversion for 4 bits


g_i = a_n,           if i = n
g_i = a_{i+1} ⊕ a_i, if i = 0, 1, 2, ..., (n − 1)   (3.19)

The architecture for converting a 4-bit binary code to a 4-bit Gray code is shown in Fig. 3.11. Similarly, the Boolean expressions for converting Gray code to the equivalent binary code can be generated using a K-map. The expressions are

a_i = g_n,           if i = n
a_i = a_{i+1} ⊕ g_i, if i = 0, 1, 2, ..., (n − 1)   (3.20)

The corresponding scheme for converting a 4-bit Gray code to binary code is given
in Fig. 3.12.
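Both conversions collapse to classic bit tricks; the shift-based forms below are equivalent to Eqs. (3.19)-(3.20) for any word length:

```python
def bin_to_gray(x):
    return x ^ (x >> 1)     # MSB passes through; g_i = a_{i+1} XOR a_i

def gray_to_bin(g):
    x = 0
    while g:                # successive XOR-and-shift steps undo the reflection
        x ^= g
        g >>= 1
    return x

print(format(bin_to_gray(0b0100), "04b"))  # 0110, matching Table 3.7
```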

3.12 Conversion Between Binary and BCD Code

The binary number system versus the Binary Coded Decimal (BCD) code is shown in Table 3.8. In BCD code, 10 digits, varying from 0 to 9, are used to represent decimal numbers, and each digit has an equivalent binary representation. Thus, up to digit 9 in decimal, both BCD and the binary number system are the

Table 3.8 Binary representation and their BCD equivalents


Decimal number Binary codes BCD codes
0 0000 00000
1 0001 00001
2 0010 00010
3 0011 00011
4 0100 00100
5 0101 00101
6 0110 00110
7 0111 00111
8 1000 01000
9 1001 01001
10 1010 10000
11 1011 10001
12 1100 10010
13 1101 10011
14 1110 10100
15 1111 10101

same. Often BCD codes are used to display the results of a digital system on an LCD display. In such cases, conversion between binary and BCD codes is very important. These conversion techniques are described below.

3.12.1 Binary to BCD Conversion

After observing Table 3.8, it can be seen that after digit 9, 6 is added to the binary number to get the corresponding BCD code. For example, 10 in the decimal system is represented as 1_0000 in BCD, which read as a binary number is 16. Here, we will discuss the well known double-dabble [1] algorithm for binary to BCD conversion. This algorithm is also known as shift-and-add-3. The binary number is left shifted, and if any nibble under the weightage of Ones, Tens or Hundreds is equal to or greater than 101₂ (i.e., 5), then 3 is added to that nibble. Then again a left shift is applied and the steps repeat. In this way the number is converted to the BCD code. An example of this conversion technique is shown in Table 3.9.
The architecture for binary to BCD conversion is shown in Fig. 3.13. The Add-3 block adds 3 to a 4-bit number whenever the number is equal to or greater than 5. The structure of this Add-3 block is shown in Fig. 3.14. Seven Add-3 blocks are used in Fig. 3.13, and a selection logic selects the number after the addition of 3.
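The double-dabble steps of Table 3.9 can be sketched for an 8-bit input (nibble offsets 0, 4, 8 correspond to the Ones, Tens and Hundreds digits):

```python
def bin_to_bcd(x, bits=8):
    """Shift-and-add-3 (double dabble): 8-bit binary -> packed BCD digits."""
    bcd = 0
    for i in range(bits - 1, -1, -1):
        for shift in (0, 4, 8):              # Ones, Tens, Hundreds nibbles
            if (bcd >> shift) & 0xF >= 5:
                bcd += 3 << shift            # Add-3 before the next left shift
        bcd = (bcd << 1) | ((x >> i) & 1)
    return bcd

print(format(bin_to_bcd(0b00111111), "08b"))  # 01100011, i.e. BCD 63
```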

Table 3.9 Example of binary to BCD code conversion


Operations Tens Ones Binary
Start 00111111
Shift 0 0111111
Shift 00 111111
Shift 0001 11111
Shift 0 0011 1111
Shift 00 0111 111
Add 3 00 1010 111
Shift 001 0101 11
Add 3 001 1000 11
Shift 0011 0001 1
Shift 0110 0011

Fig. 3.13 Binary number system to BCD code conversion circuit

Fig. 3.14 The structure of the Add-3 circuit

Fig. 3.15 An example of BCD to binary conversion: 63 in BCD (0110_0011) reduced step by step to binary 00111111

3.12.2 BCD to Binary Conversion

The conversion of BCD codes to binary numbers is also important in digital circuits. A BCD number x = 123 can be expressed as x = 123 = 1 × 100 + 2 × 10 + 3 × 1. The conversion from a BCD number to a binary number follows this philosophy. For example, in the BCD number x = 0001_1001, the first four bits from the LSB have the weightage of One and the next four bits have the weightage of Ten (1010). Thus this number can be converted to binary as y = 1001 + (0001 × 1010) = 0001_0011. Another example of BCD to binary conversion is shown in Fig. 3.15, where the BCD number 63 (0110_0011) is converted to binary (00111111).
The BCD to binary conversion circuit is shown in Fig. 3.16. The LSB is equal for binary and BCD codes. Here, two 4-bit adders are used. In the first adder, two bits under the weightage of Ten are resolved, and in the second adder the next two bits

Fig. 3.16 BCD code to binary conversion circuit



Table 3.10 Condition for adding multiples of TEN

x5 x4   b3 b2 b1 b0
0  0    0  0  0  0
0  1    0  1  0  1
1  0    1  0  1  0
1  1    1  1  1  1

are considered. Through the b input of each adder, multiples of the weights are
added. The values applied to the b input follow Table 3.10: for the first adder,
b3 = b1 = x5 and b2 = b0 = x4. The conditions for the second 4-bit adder are
derived similarly.

3.13 Parity Generators/Checkers

Parity generation and checking is an error detection technique used in the digital
transmission of bits. A parity bit is added to the data to make the total number of
ones either even or odd. In the even parity scheme, the parity bit is '0' when the
data contains an even number of ones and '1' when it contains an odd number of
ones. In the odd parity scheme, the parity bit is '1' for an even number of ones and
'0' for an odd number of ones. The parity bit is generally placed at the MSB.
Realization of both types of parity for 4-bit data is shown in Figs. 3.17 and 3.18.
If evn_parity is '1' then there is an odd number of ones in the data. Three XOR
gates are used to compute evn_parity. odd_parity is simply the inverse of
evn_parity, but a separate architecture is shown in Fig. 3.18. This is a balanced
architecture and has some advantages over the structure shown in Fig. 3.17; it
will be discussed in detail in Chap. 15.
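The parity rules above amount to an XOR reduction of the data bits. The following Python sketch is illustrative only; the function names are mine.

```python
def even_parity(bits):
    """XOR reduction: '1' when the data contains an odd number of ones."""
    p = 0
    for b in bits:
        p ^= b
    return p

def odd_parity(bits):
    """Complement of the even parity bit."""
    return 1 - even_parity(bits)

data = [1, 0, 1, 1]                         # three ones (odd count)
print(even_parity(data), odd_parity(data))  # 1 0
```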

Fig. 3.17 Even parity generation for 4-bit data

Fig. 3.18 Odd parity generation for 4-bit data

3.14 Comparators

Comparison is an important operation in any algorithm that sorts operands or finds
a maximum or minimum. A comparator compares an operand (a) with another operand
(b) to check whether the first operand is equal to, less than or greater than the
second. In this section, the design of a comparator block using a hierarchical
modelling style is discussed: the comparator is first designed for a small number
of bits and is then used to build higher order comparators.
In a comparator, the operands are compared bit by bit. For a 16-bit comparator,
for example, comparison starts from the MSB and proceeds through all the bits.
Thus the 1-bit comparator is discussed first, followed by the design of a 16-bit
comparator. The truth table for a 1-bit comparator is shown in Table 3.11. From
the truth table it can be seen that the equal (=) output is high when the two bits
are the same; this equality check can be done simply with an XNOR gate. The Boolean
expressions for the other output signals can be derived easily and are shown below

eq = (a ⊕ b)′ (3.21)
lt = ā.b (3.22)
gt = a.b̄ (3.23)

The architecture for 1-bit comparator is shown in Fig. 3.19. This is the optimized
block diagram of the 1-bit comparator where the XNOR gate is replaced with the

Table 3.11 Truth table of a 1-bit comparator


a b = < >
0 0 1 0 0
0 1 0 1 0
1 0 0 0 1
1 1 1 0 0

Fig. 3.19 Schematic of a 1-bit comparator

Fig. 3.20 Architecture of a 4-bit comparator using 1-bit comparators

NOR gate. Now we can proceed to the design of higher order comparators using
1-bit comparators. Let's discuss the design of a 4-bit comparator, which requires
four 1-bit comparators. Operands a and b are equal only if all of their bits are
equal; thus, to check equality, the equal (=) outputs of all the comparators are
ANDed. The less than (<) and greater than (>) outputs are determined by comparing
the operands from the MSB side. If a3 > b3 then a > b. If a3 = b3 then the next
bit is checked; if a3 = b3 and a2 > b2 then again a > b. In this way successive
bits are checked until the LSB is reached. The less than operation is carried out
similarly.
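The MSB-first combining rule described above can be modelled in Python. This is an illustrative sketch, not the book's circuit; the function names comp1 and compare_bits are mine.

```python
def comp1(a, b):
    """1-bit comparator of Table 3.11: returns (lt, eq, gt)."""
    return int(a < b), int(a == b), int(a > b)

def compare_bits(a_bits, b_bits):
    """Combine 1-bit comparator outputs from the MSB down."""
    lt, eq, gt = 0, 1, 0                # start by assuming 'equal so far'
    for a, b in zip(a_bits, b_bits):    # bit lists are MSB first
        l, e, g = comp1(a, b)
        lt |= eq & l                    # earlier bits equal, this bit smaller
        gt |= eq & g                    # earlier bits equal, this bit larger
        eq &= e
    return lt, eq, gt

print(compare_bits([0, 1, 1, 0], [0, 1, 0, 1]))  # 6 vs 5 -> (0, 0, 1)
```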
The architecture of a 4-bit comparator is shown in Fig. 3.20. Here, four 1-bit
comparators are used. The comparator has three outputs: lt2, eq2 and gt2. Three
more inputs are included in this block to support the hierarchical design. These

Fig. 3.21 Architecture of a 16-bit comparator using 4-bit comparators

inputs are lt1, eq1 and gt1; the 4-bit comparator block receives them from
another block.
An architecture of a 16-bit comparator built from 4-bit comparators is shown in
Fig. 3.21. Here, four 4-bit comparators are used, and the 16-bit word length is
partitioned into four nibbles of four bits each. The nibble on the MSB side is
compared first, followed by the next nibble. The initial inputs of the 4-bit
comparator that receives the lower four bits are set to lt1 = 0, eq1 = 1 and
gt1 = 0. The eq1 input drives an AND gate and thus must be set to logic 1.

3.15 Constant Multipliers

In many signal processing or image processing applications, multiplication by
constant parameters is required. These constants are fixed for a particular
design, so complete multipliers are not needed; in fact, complete multipliers
are avoided for constant multiplication because they consume many logic gates.
The alternative is to use constant multipliers, or scale blocks, to realize
multiplication by fixed constants.
Constant multipliers are very useful in evaluating the equation b = ca, where b
is the output, a is the input operand and c is the constant parameter. Constant
multipliers are constant specific, meaning that the hardware varies from constant
to constant. Let's consider an example where the input data a is divided by the
constant 3, that is, a is multiplied by 1/3 ≈ 0.3333. This multiplication
process can be written as

b = a/3 ≈ a(2⁻² + 2⁻⁴ + 2⁻⁶ + 2⁻⁸) (3.24)

Here, a data width of 18 bits is considered, with 9 bits representing the
fractional part. When an operand a is multiplied by 2⁻ⁱ, the operand is right
shifted by i bits. This shifting is realized by the wired shifting technique,
which consumes no logic elements; the shift is implemented simply by connecting
wires and is therefore very fast. A schematic of hardware wired shifting by 1 bit
right and 1 bit left is shown in Fig. 3.22.

Fig. 3.22 Schematic for wired shifting by 1 bit right (a) and 1 bit left (b) for 4-bit data width

This wired shifting methodology supports both positive and negative numbers
represented in two's complement. For example, 0.5/2 should give 0.25 and −0.5/2
should give −0.25; this is why the MSB (the sign bit) of the input operand is
kept the same at the output. The wired shift block for a 1-bit right shift is
called RSH1 here, and the 1-bit left shift block is called LSH1.
The constant multiplication block that multiplies the input operand a by 0.3333
is shown in Fig. 3.23. Four right shift blocks are used: RSH2, RSH4, RSH6 and
RSH8. The input operand is shifted, and the shifted versions are added to obtain
the final result according to Eq. (3.24). Only three adders are used, so this
constant multiplier block is more hardware efficient than a complete multiplier.
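Equation (3.24) can be checked with a small fixed-point model in Python. This is illustrative only; the function name and the exact fixed-point format are assumptions based on the text (18-bit data with 9 fractional bits).

```python
FRAC_BITS = 9   # fractional bits of the fixed-point format assumed in the text

def mul_one_third(a):
    """Approximate a/3 with the shift-and-add terms of Eq. (3.24)."""
    return (a >> 2) + (a >> 4) + (a >> 6) + (a >> 8)

a = 3 << FRAC_BITS                        # 3.0 in fixed point
print(mul_one_third(a) / 2**FRAC_BITS)    # 0.99609375, close to 1.0
```

The small error (about 0.4%) comes from truncating the series at 2⁻⁸ and from the right-shift truncation.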
The shift blocks shown in Fig. 3.23 shift by a fixed number of bits, but many
applications require variable shifting. A variable shift block shifts an input
operand by a variable count. Before discussing variable shift blocks, first
consider a block that right shifts an input operand by 1 bit depending on a
control signal: if the control bit is high, the input operand is right shifted
by 1 bit; otherwise the block passes the input unchanged to the output. This
type of block is called a Controlled RSH (CRSH) block for the right shift and a Controlled

Fig. 3.23 Scheme for constant multiplier: the outputs of RSH2, RSH4, RSH6 and RSH8 are summed by three adders to produce a/3

Fig. 3.24 Scheme for controlled 1-bit right shift

Fig. 3.25 Scheme for variable right shift: cascaded CRSH1, CRSH2, CRSH4 and CRSH8 stages controlled by s[0] to s[3]

LSH (CLSH) block for the left shift. The CRSH1 block, shown in Fig. 3.24, shifts
the input operand by 1 bit when s equals 1; it adds the delay of one MUX to the
path.
Variable shift blocks can now be designed using the controlled shift blocks. The
variable right shift block is called the Variable RSH (VRSH) and the variable
left shift block the Variable LSH (VLSH). A diagram of the VRSH block is shown in
Fig. 3.25. It is built from CRSH1, CRSH2, CRSH4 and CRSH8 blocks and can shift an
operand by any amount from 0 to 15. With s = 0 all the controlled shift blocks
are disabled and pass the input data unchanged; s = 15 enables all the blocks.
The VRSH or VLSH block has a worst-case delay of four MUXes connected in series
and thus has a speed limitation.
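The staged structure of the VRSH block can be modelled in Python. This is an illustrative sketch, not the hardware itself; the function names crsh and vrsh are mine.

```python
def crsh(x, s, k):
    """Controlled right shift: shift x right by k bits when control s is 1."""
    return x >> k if s else x

def vrsh(x, s):
    """Variable right shift built from 1-, 2-, 4- and 8-bit stages,
    each stage enabled by one bit of the 4-bit shift amount s."""
    for i, k in enumerate((1, 2, 4, 8)):
        x = crsh(x, (s >> i) & 1, k)
    return x

print(vrsh(0b101101000, 3))  # shift right by 3 -> 0b101101 (45)
```

Because the stage shift amounts are powers of two, the enabled stages always sum to the binary value of s.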

3.16 Frequently Asked Questions

Q1. Write a Verilog code in behavioural style for an 18-bit comparator.

A1. The realization of a comparator using a behavioural model is very
straightforward, as shown below.
module comp18 (A1, B1, LT1, GT1, EQ1);
  input [17:0] A1, B1;
  output reg LT1, GT1, EQ1;
  // purely combinational comparison
  always @(A1, B1)
  begin
    if (A1 > B1)
      begin
        LT1 <= 0; GT1 <= 1; EQ1 <= 0;
      end
    else if (A1 < B1)
      begin
        LT1 <= 1; GT1 <= 0; EQ1 <= 0;
      end
    else
      begin
        LT1 <= 0; GT1 <= 0; EQ1 <= 1;
      end
  end
endmodule

Fig. 3.26 Seven-segment display and its representation: (a) the seven segments of a digit (a, b, c, d, e, f, g); (b) the number 2 represented

Q2. Write Verilog code to display BCD numbers on a Seven Segment display.
A2. A Seven Segment Display (SSD) is used to display BCD numbers and is an
inbuilt feature of many FPGA kits. The seven segments together can display
2⁷ = 128 combinations, but only a few of them are used. The representation of a
BCD number using seven segments is shown in Fig. 3.26.
module segment7 (BCD, SEG);
  input [3:0] BCD;
  output reg [6:0] SEG;
  // segment order: SEG = {a, b, c, d, e, f, g}
  always @(BCD)
  begin
    case (BCD)
      0 : SEG = 7'b1111110;
      1 : SEG = 7'b0110000;
      2 : SEG = 7'b1101101;
      3 : SEG = 7'b1111001;
      4 : SEG = 7'b0110011;
      5 : SEG = 7'b1011011;
      6 : SEG = 7'b1011111;
      7 : SEG = 7'b1110000;
      8 : SEG = 7'b1111111;
      9 : SEG = 7'b1111011;
      default : SEG = 7'b0000000;
    endcase
  end
endmodule

Fig. 3.27 Realization of different gates using 2:1 MUX: (a) NOT, out = x̄ (I0 = 1, I1 = 0); (b) OR, out = x + y (I0 = y, I1 = 1); (c) AND, out = x.y (I0 = 0, I1 = y); (d) XOR, out = x ⊕ y (I0 = y, I1 = ȳ); the select input is x in each case

Q3. Realize different logic gates using a 2:1 MUX.

A3. Different logic gates can be realized using 2:1 MUXes, which is very helpful
in FPGA implementation. The implementation of different logic functions using a
2:1 MUX is shown in Fig. 3.27. For a 2:1 MUX, the output is equal to I0 when the
select signal is 0 and equal to I1 when the select signal is 1.
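The four MUX configurations of Fig. 3.27 can be verified with a small Python model. This is illustrative only; the helper names are mine.

```python
def mux(s, i0, i1):
    """2:1 MUX: output is i0 when s = 0 and i1 when s = 1."""
    return i1 if s else i0

def not_(x):    return mux(x, 1, 0)        # Fig. 3.27(a): NOT
def or_(x, y):  return mux(x, y, 1)        # Fig. 3.27(b): OR
def and_(x, y): return mux(x, 0, y)        # Fig. 3.27(c): AND
def xor_(x, y): return mux(x, y, not_(y))  # Fig. 3.27(d): XOR

print([xor_(x, y) for x in (0, 1) for y in (0, 1)])  # [0, 1, 1, 0]
```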

13.2.14 Required Time and Arrival Time

Required time is the time within which data is required to arrive at the internal
node of a flip-flop; it is constrained by the designer. The time at which the data
actually arrives at that node is the arrival time, which incorporates all the net
and logic delays between the reference input point and the destination node.

13.2.15 Timing Paths

The following kinds of paths are considered when checking the timing of a design.
1. Flip-flop to flip-flop timing path.
2. External input device to on-chip flip-flop timing path.
3. On-chip flip-flop to external output device timing path.
4. External input device to external output device through an on-chip
combinational block.

13.3 Timing Checks

Two types of paths can cause a timing violation: the max path and the min path.
Correspondingly, two types of timing checks must be performed: a hold check for
the min path and a setup check for the max path. These two checks are explained
below.

13.3.1 Setup Timing Check

In a setup timing check, the timing relationship between the clock and the data
pin of a flip-flop is checked to verify that the setup timing requirement is met.
The setup check ensures that the data is stable for a certain amount of time, the
setup time of the flip-flop, before the active clock edge. This ensures that the
capture flip-flop captures the data correctly. Figure 13.11 illustrates the setup
condition. The condition for the setup check can be written as

Tlaunch + Tpc2q + Tpd < Tcapture + Tcycle − Tsu (13.3)


254 13 Timing Analysis

Fig. 13.11 The concept of setup timing check for a sequential circuit

13.3.2 Hold Timing Check

A hold timing check is performed to ensure that the hold specification of the
flip-flop is met. According to the hold specification, the data should be stable
for a specified amount of time after the active edge of the clock. Figure 13.12
shows the concept of the hold timing check. The condition for the hold check is

Tlaunch + Tcc2q + Tcd > Tcapture + Th (13.4)

13.4 Timing Checks for Different Timing Paths

The different types of paths were discussed above. Setup and hold checks must be
performed for each type of path; in this section, the timing checks are carried
out for the different path types.

Fig. 13.12 The concept of hold timing check for a sequential circuit

13.4.1 Setup Check for Flip-Flop to Flip-Flop Timing Path

In a flip-flop to flip-flop timing path, the two flip-flops may be connected to
the same or different clock signals. Figure 13.6 demonstrates this timing path:
the data is launched by the launch flip-flop and reaches the capture flip-flop
through a combinational circuit.

13.4.1.1 Computation of Setup Slack

In this case the required time and the arrival time are defined as

Required Time = Tclock + Tcapture − Tsu (13.5)

Arrival Time = Tlaunch + Tpc2q + Tlogic + Tnet (13.6)

Let's consider an example to understand the setup slack calculation for a
flip-flop to flip-flop timing path.

Example 13.1 The launching active clock edge occurs at 4.30 ns with Tlaunch =
0.20 ns. The clock signal has a period of 10 ns and Tcapture = 0.50 ns. The total
data path delay is 6.50 ns and the setup time is Tsu = 0.46 ns.
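The slack computation implied by Eqs. (13.5) and (13.6) can be sketched in Python. This is an illustrative helper of my own, not from the book; it assumes the "total data path delay" of Example 13.1 lumps together Tpc2q, Tlogic and Tnet, and measures both times from the launch edge.

```python
def setup_slack(t_edge, t_cycle, t_launch, t_capture, t_path, t_su):
    """Setup slack = required time - arrival time, per Eqs. (13.5)-(13.6).
    t_path lumps together Tpc2q, Tlogic and Tnet (the total data path delay)."""
    required = t_edge + t_cycle + t_capture - t_su   # next capture edge minus Tsu
    arrival = t_edge + t_launch + t_path             # when the data actually arrives
    return required - arrival

# Numbers from Example 13.1
print(f"{setup_slack(4.30, 10.0, 0.20, 0.50, 6.50, 0.46):.2f} ns")  # 3.34 ns
```

A positive slack means the setup requirement is met with margin; a negative slack would indicate a setup violation.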

Fig. 6.8 1010 sequence detector using Mealy machine for overlapping style

6.4 Sequence Detector Using Moore Machine

In this section, we discuss the implementation of the same 1010 sequence detector
using a Moore machine. Moore-type FSM sequence detectors can also be of two
types: non-overlapping and overlapping. The variation of the output signal y in
both cases is shown in Fig. 6.9; the output is delayed by one clock cycle
compared to the output of the Mealy machine.
The state diagram of the 1010 sequence detector using a Moore machine in
non-overlapping style is shown in Fig. 6.10. Inside a circle, S0/0 indicates that
in present state S0 the output is zero, and the input signal (x) is written along
the branches from one state to another. The Moore machine needs an extra state
compared to the Mealy machine; in this extra state (S4) the output is 1, and the
objective is to reach this output state from any state. For the same input
sequence '011010', the sequence of next states is S0 S1 S1 S2 S3 S4.
The 1010 sequence detector using the Moore machine is also designed here using
K-maps. The state table, formed according to the state diagram of Fig. 6.10, is
shown in Table 6.4. The next step is to assign the states: five states are used,
so a minimum of three bits is needed to represent them. The state assignment
table is shown in Table 6.5.
The excitation table for the 1010 sequence detector using the Moore machine in
non-overlapping style is shown in Table 6.6. D flip-flops are again used; in
comparison to the Mealy machine, three flip-flops are required. Using K-map
optimization, the Boolean expressions for the flip-flop inputs and the output are
derived.

Fig. 6.9 '1010' sequence detector output for the overlapping and non-overlapping cases for the Moore machine

Fig. 6.10 State diagram for 1010 sequence detector using Moore machine in non-overlapping style

Table 6.5 State table with state assignments for sequence '1010' detector using Moore machine in non-overlapping style

Present state   Next state (X=0)   Next state (X=1)   Output (X=0)   Output (X=1)
000             000                001                0              0
001             010                001                0              0
010             000                011                0              0
011             100                001                0              0
100             000                001                1              1

Table 6.6 Excitation table for 1010 sequence detector using Moore machine

Present state   Input   Next state    F/F inputs   Output
q2 q1 q0        X       q2* q1* q0*   d2 d1 d0     y
0  0  0         0       0   0   0     0  0  0      0
0  0  0         1       0   0   1     0  0  1      0
0  0  1         0       0   1   0     0  1  0      0
0  0  1         1       0   0   1     0  0  1      0
0  1  0         0       0   0   0     0  0  0      0
0  1  0         1       0   1   1     0  1  1      0
0  1  1         0       1   0   0     1  0  0      0
0  1  1         1       0   0   1     0  0  1      0
1  0  0         0       0   0   0     0  0  0      1
1  0  0         1       0   0   1     0  0  1      1

Fig. 6.11 K-maps for the 1010 sequence detector using the Moore machine: (a) d0, (b) d1, (c) d2, (d) output y

The K-map optimization is shown in Fig. 6.11. The don't care conditions could be
used to generate more optimized expressions, but they are not considered here.
The Boolean expressions without this optimization are shown below.

d0 = q̄2 x + q̄1 q̄0 x (6.4)
d1 = q̄2 q1 q̄0 x + q̄2 q̄1 q0 x̄ (6.5)
d2 = q̄2 q1 q0 x̄ (6.6)
y = q2 q̄1 q̄0 (6.7)

The hardware realization of the 1010 sequence detector using the Moore machine in
non-overlapping style is shown in Fig. 6.12; three D flip-flops are used. The
output of this sequence detector is a function of the present state only, so it
is strictly a Moore machine.
The discussion above covered the non-overlapping style. The detector can
similarly be converted to detect overlapping sequences. The nature of the output
was shown earlier in Fig. 6.9, and the state diagram for the overlapping style is
shown in Fig. 6.13. In the present state S4, if the input is 1, the state
transitions from S4 to S3 to reach the nearest state expecting a 0.
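The state diagram of Fig. 6.10 can be simulated with a small Python model. This is an illustrative sketch of my own; the restart transitions out of S4 are my reading of the non-overlapping diagram.

```python
# Transition table following Fig. 6.10; output is 1 only in state S4.
NEXT = {
    "S0": {0: "S0", 1: "S1"},
    "S1": {0: "S2", 1: "S1"},
    "S2": {0: "S0", 1: "S3"},
    "S3": {0: "S4", 1: "S1"},
    "S4": {0: "S0", 1: "S1"},   # non-overlapping: restart after a detection
}

def detect(bits):
    state, outputs = "S0", []
    for x in bits:
        state = NEXT[state][x]
        outputs.append(int(state == "S4"))   # Moore output: depends on state only
    return outputs

print(detect([0, 1, 1, 0, 1, 0]))  # states S0 S1 S1 S2 S3 S4 -> [0, 0, 0, 0, 0, 1]
```

Running the '011010' sequence from the text reproduces the state sequence S0 S1 S1 S2 S3 S4 and a single detection on the final bit.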

Fig. 6.12 Hardware implementation of 1010 sequence detector using Moore machine in non-overlapping style

Fig. 6.13 State diagram of 1010 sequence detector using Moore machine in overlapping style

Fig. 6.16 State diagram of serial adder using Moore FSM

Table 6.9 State table for serial adder using Moore machine

Present state   ab=00   ab=01   ab=10   ab=11   Output (sum)
S0              S0      S3      S3      S1      0
S1              S3      S1      S1      S2      0
S2              S3      S1      S1      S2      1
S3              S0      S3      S3      S1      1

Fig. 6.17 Implementation of serial adder using Moore FSM: a full adder with D flip-flops for the sum and carry

6.7 FSM-Based Vending Machine Design

A vending machine is a practical example where an FSM can be used; the ticket
dispensers at stations and the canned drink dispensers at shops are examples of
vending machines. In this chapter we consider a simple vending machine which
dispenses a can of coke after 15 rupees have been deposited. The machine has only
one slot to receive coins, so customers can deposit one coin at a time. It
accepts only 10 rupee (T) or 5 rupee (F) coins and it does not give change. The
input signal x can take the following values:
1. x = 00, no coin deposited.
2. x = 01, 5 rupee coin (F) deposited.
3. x = 10, 10 rupee coin (T) deposited.
4. x = 11, forbidden; both coins cannot be deposited at the same time.

Fig. 6.18 State diagram for the simple vending machine problem using FSM

Table 6.10 State table for FSM-based vending machine

Present state   x=00   x=01   x=10   x=11   Output
S0              S0     S3     S3     S1     0
S1              S3     S1     S1     S2     0
S2              S3     S1     S1     S2     1
S3              S0     S0     S0     S1     1

A customer can deposit the 15 rupees as 10 + 5, 5 + 10 or 5 + 5 + 5. If more
than 15 rupees is deposited, the machine stays in the same state, asking the
customer to deposit the right amount. The state diagram for the vending machine
is shown in Fig. 6.18. To get a can of drink the customer has to deposit 15
rupees; in terms of the machine, the objective is to reach state S3 from state
S0. Once state S3 is reached, the vending machine dispenses a can and asks the
customer whether another is wanted. The architecture of the vending machine can
be designed using the state table shown in Table 6.10.
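The behaviour described above can be simulated with a small Python model. This is an illustrative sketch of my own, not the book's circuit; following the state diagram, it assumes a 10-rupee coin at state S2 also dispenses (the machine gives no change).

```python
# States track the amount deposited: S0 = 0, S1 = 5, S2 = 10 rupees;
# reaching 15 dispenses a can and restarts. Coin codes follow the text:
# '00' no coin, '01' five rupees (F), '10' ten rupees (T).
NEXT = {
    "S0": {"00": "S0", "01": "S1", "10": "S2"},
    "S1": {"00": "S1", "01": "S2", "10": "S3"},
    "S2": {"00": "S2", "01": "S3", "10": "S3"},
}

def vend(coins):
    state, cans = "S0", 0
    for c in coins:
        state = NEXT[state][c]
        if state == "S3":       # 15 rupees (or more) reached: dispense
            cans += 1
            state = "S0"        # restart for the next customer
    return cans

print(vend(["01", "10", "10", "01"]))  # 5+10 then 10+5: two cans dispensed
```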
