Book Chapter03 Part06

144 Chapter 3 / Boolean Algebra and Digital Logic

3.6 / Sequential Circuits 145

ECL (emitter-coupled logic) gates are used in situations that require
extremely high speeds. Whereas TTL and MOS use transistors as digital
switches (the transistor is either saturated or cut off), ECL uses transistors to
guide current through gates, resulting in transistors that are never completely
turned off nor completely saturated. Because they are always in an active
state, the transistors can change states very quickly. However, the trade-off for
this high speed is substantial power requirements. Therefore, ECL is used only
rarely, in very specialized applications.
A newcomer to the logic family scene, BiCMOS (bipolar CMOS) integrated
circuits use both the bipolar and CMOS technologies. Despite the fact that
BiCMOS logic consumes more power than TTL, it is considerably faster. Although
not currently used in manufacturing, BiCMOS appears to have great potential.

3.6.6 An Application of Sequential Logic: Convolutional Coding and
Viterbi Detection
The special section that follows Chapter 2 describes several coding methods
employed in data storage and communication. One of them is the partial response
maximum likelihood (PRML) encoding method. Our previous discussion (which
isn't a prerequisite for understanding this section) concerned the "partial response"
component of PRML. The "maximum likelihood" component derives from the
way that bits are encoded and decoded. The salient feature of the decoding
process is that only certain bit patterns are valid. These patterns are produced
using a convolutional code. A Viterbi decoder reads the bits that have been out-
put by a convolutional encoder and compares the symbol stream read with a set
of "probable" symbol streams. The one with the least error is selected for output.
We present this discussion because it brings together a number of concepts from
this chapter as well as Chapter 2. We begin with the encoding process.
The Hamming code introduced in Chapter 2 is a type of forward error correc-
tion that uses blocks of data (or block coding) to compute the necessary redun-
dant bits. Some applications require a coding technique suitable for a continuous
stream of data, such as that from a satellite television transmitter. Convolutional
coding is a method that operates on an incoming serial bit stream, generating an
encoded serial output stream (including redundant bits) that enables it to correct
errors continuously. A convolutional code is an encoding process whereby the
output is a function of the input and some number of bits previously received.
Thus, the input is overlapped, or convoluted, over itself to form a stream of out-
put symbols. In a sense, a convolutional code builds a context for accurate decod-
ing of its output. Convolutional encoding combined with Viterbi decoding has
become an accepted industry standard for encoding and decoding data stored or
transmitted over imperfect (noisy) media.
The convolutional coding mechanism used in PRML is illustrated in Figure
3.33. Careful examination of this circuit reveals that two output bits are written
for each input bit. The first output bit is a function of the input bit and the second
previous input bit: A XOR C. The second bit is a function of the input bit and the
two previous bits: A XOR C XOR B. The two AND gates at the right-hand side of
the diagram alternately select one of these functions during each pulse of the
clock. The input is shifted through the D flip-flops on every second clock pulse.
We note that the leftmost flip-flop serves only as a buffer for the input and isn't
strictly necessary.

[Figure: three D flip-flops (A, B, and C) form a shift register driven by the clock; XOR gates compute A XOR C and A XOR C XOR B, and two AND gates alternately route these functions to the output.]

FIGURE 3.33 Convolutional Encoder for PRML
At first glance, it may not be easy to see how the encoder produces two out-
put bits for every input bit. The trick has to do with the flip-flop situated between
the clock and the other components of the circuit. When the complemented out-
put of this flip-flop is fed back to its input, the flip-flop alternately stores 0s and
1s. Thus, the output goes high on every other clock cycle, enabling and disabling
the correct AND gate with each cycle.
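This divide-by-two behavior is easy to model in a few lines. The sketch below (our own illustration, not the book's) simulates a D flip-flop whose complemented output is wired back to its input, producing the alternating enable signal described above:

```python
# Toy model: a D flip-flop with Q' fed back to D toggles on every clock
# edge, so Q enables the upper AND gate on one cycle and the lower AND
# gate on the next.
q = 0                        # assumed initial state
enable_history = []
for cycle in range(6):       # six clock pulses
    enable_history.append(q)
    q = 1 - q                # at the clock edge, D = Q' is latched into Q
# enable_history alternates: [0, 1, 0, 1, 0, 1]
```

Because the feedback path contains only the complemented output, no extra gating is needed; the flip-flop itself acts as the frequency divider.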
We step through a series of clock cycles in Figure 3.34. The initial state of the
encoder is assumed to contain all 0s in the flip-flops labeled A, B, and C. A cou-
ple of clock cycles are required to move the first input into the A flip-flop
(buffer), and the encoder outputs two zeros. Figure 3.34a shows the encoder with
the first input (1) after it has passed to the output of flip-flop A. We see that the
clock on flip-flops A, B, and C is enabled, as is the upper AND gate. Thus, the
function A XOR C is routed to the output. At the next clock cycle (Figure 3.34b),
the lower AND gate is enabled, which routes the function A XOR C XOR B to
the output. However, because the clock on flip-flops A, B, and C is disabled, the
input bit does not propagate from flip-flop A to flip-flop B. This prevents the next
input bit from being consumed while the second output bit is written. At clock
cycle 3 (Figure 3.34c), the input has propagated through flip-flop A, and the bit
that was in flip-flop A has propagated to flip-flop B. The upper AND gate on the
output is enabled and the function A XOR C is routed to the output.
The characteristic table for this circuit is given in Table 3.13. As an example,
consider the stream of input bits, 11010010. The encoder initially contains all 0s,

[Figure: four panels, (a) through (d), showing the encoder state at clock cycles 1 through 4 as the input bits shift through flip-flops A, B, and C and the output pairs are produced.]

FIGURE 3.34 Stepping Through Four Clock Cycles of a Convolutional Encoder



Input   Current State   Next State   Output
          B  C            B  C
  0       0  0            0  0         00
  1       0  0            1  0         11
  0       0  1            0  0         11
  1       0  1            1  0         00
  0       1  0            0  1         10
  1       1  0            1  1         01
  0       1  1            0  1         01
  1       1  1            1  1         10

TABLE 3.13 Characteristic Table for the Convolutional Encoder in Figure 3.33

so B = 0 and C = 0. We say that the encoder is in State 0 (00₂). When the leading
1 of the input stream exits the buffer, A, we have B = 0 and C = 0, giving (A XOR
C XOR B) = 1 and (A XOR C) = 1. The output is 11 and the encoder transitions to
State 2 (10₂). The next input bit is 1, and we have B = 1 and C = 0 (in State 2),
giving (A XOR C XOR B) = 0 and (A XOR C) = 1. The output is 01 and the
encoder transitions to State 3 (11₂). Following this process over the remaining six
bits, the completed function is:

F(1101 0010) = 11 01 01 00 10 11 11 10
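The table-level behavior can be checked with a short simulation. Here is a minimal sketch in Python (the function name is ours, and the output pair is emitted in the order used by the worked example above):

```python
def convolve(bits):
    """Simulate the encoder of Figure 3.33: two output bits per input bit."""
    b = c = 0                        # flip-flops B and C start in State 0
    out = []
    for a in bits:                   # a is the buffered input bit (flip-flop A)
        out += [a ^ b ^ c, a ^ c]    # output pair, per Table 3.13
        b, c = a, b                  # shift: A -> B, B -> C
    return out

encoded = convolve([1, 1, 0, 1, 0, 0, 1, 0])
# the pairs group as 11 01 01 00 10 11 11 10, matching F(1101 0010)
```

Tracing a few iterations by hand against Table 3.13 is a good way to confirm that the state update `b, c = a, b` captures the shift register's behavior.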
The encoding process is made a little clearer using the Mealy machine (Fig-
ure 3.35). This diagram informs us at a glance as to which transitions are possible
and which are not. You can see the correspondence between the Figure 3.35
machine and the characteristic table by reading the table and tracing the arcs, or
vice versa. The fact that there is a limited set of allowable transitions is crucial to
the error-correcting properties of this code and to the operation of the Viterbi
decoder, which is responsible for decoding the stream of bits correctly. By
reversing the inputs with the outputs on the transition arcs, as shown in Figure
3.36, we place bounds around the set of possible decoding inputs.

FIGURE 3.35 Mealy Machine for the Convolutional Encoder in Figure 3.33

FIGURE 3.36 Mealy Machine for a Convolutional Decoder
For example, suppose the decoder is in State 1 and sees the pattern 00 01.
The decoded bit values returned are 1 1, and the decoder ends up in State 3. (The
path traversed is 1 → 2 → 3.) If, on the other hand, the decoder is in State 2 and
sees the pattern 00 11, an error has occurred because there is no outbound transi-
tion on State 2 for 00. The outbound transitions on State 2 are 01 and 10. Both of
these have a Hamming distance of 1 from 00. If we follow both (equally likely)
paths out of State 2, the decoder ends up either in State 1 or State 3. We see that
there is no outbound transition on State 3 for the next pair of bits, 11. Each out-
bound transition from State 3 has a Hamming distance of 1 from 11. This gives an
accumulated Hamming distance of 2 for both paths: 2 → 3 → 1 and 2 → 3 → 3.
However, State 1 has a valid transition on 11. By taking the path 2 → 1 → 0, the
accumulated error is only 1, so this is the most likely sequence. The input there-
fore decodes to 00 with maximum likelihood.
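The path-metric bookkeeping just described is exactly what a Viterbi decoder automates. The sketch below (our own illustration; the states and output pairs are derived from the encoder of Figure 3.33, and the function names are hypothetical) keeps, for each state, the lowest accumulated Hamming distance and the input bits along that best path:

```python
from itertools import product

def branches():
    """For each state (B,C as a 2-bit number), list (input, output pair, next state)."""
    trans = {}
    for b, c in product((0, 1), repeat=2):
        s = 2 * b + c
        trans[s] = [(a, (a ^ b ^ c, a ^ c), 2 * a + b) for a in (0, 1)]
    return trans

def viterbi(pairs, start):
    """Return (decoded bits, accumulated Hamming distance) of the best path."""
    INF = float("inf")
    trans = branches()
    metric = {s: (0 if s == start else INF) for s in range(4)}
    path = {s: [] for s in range(4)}
    for r in pairs:                          # one received pair per input bit
        new_metric = {s: INF for s in range(4)}
        new_path = {s: [] for s in range(4)}
        for s in range(4):
            if metric[s] == INF:
                continue
            for a, out, ns in trans[s]:
                # Hamming distance between the expected and received pair
                d = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if d < new_metric[ns]:
                    new_metric[ns], new_path[ns] = d, path[s] + [a]
        metric, path = new_metric, new_path
    best = min(range(4), key=metric.get)     # survivor with the least error
    return path[best], metric[best]

# From State 2, the pattern 00 11 decodes to 0 0 with accumulated error 1,
# reproducing the hand trace above:
bits, err = viterbi([(0, 0), (1, 1)], start=2)
```

Running `viterbi([(0, 0), (0, 1)], start=1)` likewise reproduces the first example: decoded bits 1 1 with zero accumulated error.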
An equivalent (and probably clearer) way of expressing this idea is through
the trellis diagram, shown in Figure 3.37. The four states are indicated on the left
side of the diagram. The transition (or time) component reads from left to right.
Every code word in a convolutional code is associated with a unique path in the
trellis diagram. A Viterbi detector uses the logical equivalent of paths through this
diagram to determine the most likely bit pattern. In Figure 3.37, we show the
state transitions that occur when the input sequence 00 10 11 11 is encountered
with the decoder starting in State 1. You can compare the transitions in the trellis
diagram with the transitions in the Mealy diagram in Figure 3.36.
Suppose we introduce an error in the first pair of bits in our input, giving the
erroneous string 10 10 11 11. With our decoder starting in State 1 as before, Fig-
ure 3.38 traces the possible paths through the trellis. The accumulated Hamming
distance is shown on each of the transition arcs. The path that correctly assumes
the string should be 00 10 11 11 is the one having the smallest accumulated
error, so it is accepted as the correct sequence.

[Figure: trellis diagram with states 0 through 3 on the left and time running left to right, showing the transitions taken for the sequence 00 10 11 11 with the decoder starting in State 1.]

FIGURE 3.37 Trellis Diagram Illustrating State Transitions for the Sequence
00 10 11 11

[Figure: the same trellis showing the candidate paths and the accumulated Hamming distance on each transition arc for the erroneous sequence 10 10 11 11.]

FIGURE 3.38 Trellis Diagram Illustrating Hamming Errors for the Sequence
10 10 11 11
In most cases where it is applied, the Viterbi decoder provides only one level
of error correction. Additional error correction mechanisms such as cyclic redun-
dancy checking and Reed-Solomon coding (discussed in Chapter 2) are applied
after the Viterbi algorithm has done what it can to produce a clean stream of sym-
bols. All these algorithms are usually implemented in hardware for utmost speed
using the digital building blocks described in this chapter.
