Digital Communications I: Modulation and Coding Course
Trellis of an example ½ Conv. code
[Figure: trellis of the example rate-½ convolutional code. The input sequence m is encoded into the codeword sequence U and sent over the channel; each branch word U_i = (u_1i, …, u_ji, …, u_ni) carries n coded bits. The demodulator produces n outputs per branch word, giving the received sequence Z with Z_i = (z_1i, …, z_ji, …, z_ni).]
Soft and hard decision decoding
In hard-decision decoding:
The demodulator makes a firm (hard) decision on whether a one or a zero was transmitted, and provides the decoder with no other information, such as how reliable the decision is.
In soft-decision decoding:
The demodulator passes some side information to the decoder along with the decision. The side information gives the decoder a measure of confidence in the decision.
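To make the distinction concrete, here is a minimal Python sketch (an added illustration; the function names and the BPSK mapping 0 → +1, 1 → −1 are assumptions, not from the slides) of the two metric types for a single branch word:

```python
# Minimal sketch: hard vs. soft metrics for one branch word.
# Assumes BPSK mapping 0 -> +1, 1 -> -1; names are illustrative only.

def hard_metric(received_bits, branch_bits):
    """Hamming distance between the demodulator's hard decisions
    and a candidate branch word."""
    return sum(r != b for r, b in zip(received_bits, branch_bits))

def soft_metric(received_values, branch_bits):
    """Squared Euclidean distance between the demodulator's raw
    (soft) outputs and the BPSK symbols of a candidate branch word."""
    symbols = [1.0 if b == 0 else -1.0 for b in branch_bits]
    return sum((z - s) ** 2 for z, s in zip(received_values, symbols))

print(hard_metric([1, 1], [1, 1]))        # 0: hard decisions match exactly
print(soft_metric([-0.2, -0.1], [1, 1]))  # 1.45: same decisions, but the
                                          # soft values reveal low confidence
```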
Soft and hard decision decoding …
ML soft-decision decoding rule:
Choose the path through the trellis with the minimum Euclidean distance from the received sequence.
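In equation form (added here for clarity; the notation follows the trellis figure earlier in the lecture):

\hat{U} = \arg\min_{U^{(m)}} \sum_i \sum_{j=1}^{n} \left( z_{ji} - s_{ji}^{(m)} \right)^2

where z_{ji} are the demodulator's soft outputs and s_{ji}^{(m)} is the modulated symbol carrying coded bit u_{ji} of candidate codeword U^{(m)}.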
The Viterbi algorithm
Example of hard-decision Viterbi decoding
[Figure: trellis diagram over t1 … t6 with branch metrics (Hamming distances) and partial path metrics S(t_i) marked at each state and time.
Transmitted: m = (101), U = (11 10 00 10 11). Received (two channel errors): Z = (11 10 11 10 01).
Hard-decision Viterbi decoding selects Û = (11 10 11 00 11), i.e. m̂ = (100), a decoding error.]
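To make the procedure concrete, here is a runnable Python sketch of hard-decision Viterbi decoding (an added illustration; it assumes the standard rate-½, K = 3 code with octal generators (7, 5), which matches the branch words of this example):

```python
# Hard-decision Viterbi decoder for the rate-1/2, K=3 convolutional
# code with generators (7, 5) octal. The 2-bit state holds the two
# previous input bits, most recent in the high bit.

G = [0b111, 0b101]                              # generator taps

def branch_output(state, bit):
    """The two coded bits on the branch leaving `state` for input `bit`."""
    reg = (bit << 2) | state                    # [bit, prev, prev-prev]
    return [bin(reg & g).count("1") % 2 for g in G]

def viterbi_hard(received):
    """received: list of 2-bit branch words, e.g. [[1,1],[1,0],...]."""
    INF = float("inf")
    metric = [0, INF, INF, INF]                 # start in the all-zero state
    paths = [[], [], [], []]
    for r in received:
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                out = branch_output(s, bit)
                nxt = (bit << 1) | (s >> 1)     # shift the new bit in
                m = metric[s] + sum(a != b for a, b in zip(r, out))
                if m < new_metric[nxt]:         # keep the survivor
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]

Z = [[1, 1], [1, 0], [1, 1], [1, 0], [0, 1]]    # received sequence above
print(viterbi_hard(Z))                          # [1, 0, 0, 0, 0]
```

With this tie-breaking it returns [1, 0, 0, 0, 0], i.e. m̂ = (100) followed by the two flush bits, reproducing the hard-decision error of the slide.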
Example of soft-decision Viterbi decoding
[Figure: trellis diagram over t1 … t6 for the same transmission, now with soft demodulator outputs (values quantized to ±1 and ±2/3 in this example) in place of hard bits. Branch metrics (correlation values such as ±1/3, ±5/3) and partial metrics S(t_i) are marked on the trellis.
Transmitted: m = (101), U = (11 10 00 10 11). Soft-decision Viterbi decoding selects Û = (11 10 00 10 11), i.e. m̂ = (101): the soft information avoids the error made under hard decisions.]
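The same dynamic program handles soft decisions; only the branch metric changes (and the survivor keeps the maximum). A minimal soft-metric helper (added illustration; BPSK mapping assumed as before):

```python
# Soft-decision variant: correlate the demodulator's soft outputs with
# the BPSK symbols (0 -> +1, 1 -> -1) of each branch word; the decoder
# then keeps the path of MAXIMUM accumulated correlation, which is
# equivalent to minimum Euclidean distance.

def soft_branch_metric(soft_values, branch_bits):
    symbols = [1.0 if b == 0 else -1.0 for b in branch_bits]
    return sum(z * s for z, s in zip(soft_values, symbols))
```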
Today, we are going to talk about:
The properties of Convolutional codes:
Free distance
Transfer function
Systematic Conv. codes
Catastrophic Conv. codes
Error performance
Interleaving
Concatenated codes
Error correction scheme in Compact disc
Free distance of Convolutional codes
Distance properties:
Since a convolutional encoder generates codewords of various lengths (as opposed to a block code), the following approach is used to find the minimum distance between all pairs of codewords:
Since the code is linear, the minimum distance of the code is the minimum distance between each codeword and the all-zero codeword.
This is the minimum distance over the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
It is called the minimum free distance, or simply the free distance, of the code, denoted d_free or d_f.
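As a sanity check (an added sketch, not from the slides), d_free of the rate-½ (7, 5) code used in these examples can be found by brute force: enumerate short input blocks that diverge from the all-zero path at the first branch and are flushed back to the zero state, and take the minimum codeword weight:

```python
# Brute-force free distance of the (7,5) rate-1/2 code: the minimum
# weight over all short paths that leave the all-zero state and are
# driven back to it by two flush zeros.

G = [0b111, 0b101]

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = (b << 1) | (state >> 1)
    return out

def free_distance(max_len=10):
    best = None
    for n in range(1, max_len + 1):
        for pattern in range(1 << n):
            bits = [(pattern >> i) & 1 for i in range(n)]
            if bits[0] != 1:                  # must diverge immediately
                continue
            w = sum(encode(bits + [0, 0]))    # two flush bits remerge
            best = w if best is None else min(best, w)
    return best

print(free_distance())   # 5 for this code
```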
Free distance …
[Figure: trellis diagram over t1 … t6; the minimum-weight path that diverges from and remerges with the all-zero path has weight 5, so d_f = 5 for this code.]
Transfer function of Convolutional codes
Transfer function:
The transfer function (also called the generating function) is a tool that provides information about the weight distribution of the codewords.
The weight distribution specifies the weights of the different paths in the trellis (codewords), together with their lengths and the weights of the corresponding information sequences.
T(D, L, N) = \sum_{i=d_f}^{\infty} \sum_{j=K}^{\infty} \sum_{l=1}^{\infty} D^i L^j N^l

D, L, N: placeholders (dummy variables)
i: distance of the path from the all-zero path
j: number of branches that the path takes until it remerges with the all-zero path
l: weight of the information bits corresponding to the path
Transfer function …
[Figure: split state diagram of the example code, with the all-zero state split into a starting node a = 00 and an ending node e = 00, and intermediate states b = 10, c = 01, d = 11. The branches are labeled with their gains: D^2 LN, DL, D^2 L, DLN, DL, and DLN.]
Transfer function …
Solving T(D, L, N) = X_e / X_a over the split state diagram gives

T(D, L, N) = \frac{D^5 L^3 N}{1 - D L (1 + L) N} = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + \dots

One path with weight 5, length 3 and data weight of 1
One path with weight 6, length 4 and data weight of 2
One path with weight 6, length 5 and data weight of 2
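A quick way to verify this expansion (an added check; the sympy package is assumed available) is to expand the closed form as a geometric series in the loop gain:

```python
import sympy as sp

D, L, N = sp.symbols("D L N")

loop = D * L * (1 + L) * N                # loop gain from the denominator
T = sp.expand(D**5 * L**3 * N * sum(loop**k for k in range(4)))
print(T)
# contains D**5*L**3*N, D**6*L**4*N**2, D**6*L**5*N**2, ... as claimed
```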
Systematic Convolutional codes
A convolutional coder of rate k/n is systematic if the k input bits appear unchanged as part of the n-bit branch word.
[Figure: block diagram of a systematic encoder; the input is fed straight through to one of the outputs.]
Catastrophic Convolutional codes
Catastrophic error propagation in Conv. codes:
A finite number of errors in the coded bits causes an infinite number of errors in the decoded data bits.
A convolutional code is catastrophic if there is a closed loop of zero weight in its state diagram.
Systematic codes are not catastrophic:
At least one bit of every branch word is generated directly by the input bits.
Only a small fraction of non-systematic codes are catastrophic (a practical test is sketched below).
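As an added aside (not on the original slide): for rate-1/n codes the Massey–Sain condition gives a concrete test, namely that the code is catastrophic exactly when its generator polynomials over GF(2) share a common factor other than a power of X. A sketch:

```python
# Massey-Sain test (illustrative): a rate-1/n convolutional code is
# catastrophic iff gcd(g_1(X), ..., g_n(X)) over GF(2) is not X^l.
# Polynomials are ints: bit i holds the coefficient of X^i.

def gf2_mod(a, b):
    """Remainder of a(X) divided by b(X), arithmetic over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_catastrophic(generators):
    g = generators[0]
    for h in generators[1:]:
        g = gf2_gcd(g, h)
    return g & (g - 1) != 0      # more than one set bit => not X^l

print(is_catastrophic([0b111, 0b101]))  # False: the (7,5) code is safe
print(is_catastrophic([0b011, 0b101]))  # True: 1+X divides both 1+X and 1+X^2
```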
Catastrophic Conv. …
Example of a catastrophic Conv. code:
Assume the all-zero codeword is transmitted.
Three errors in the coded bits can make the decoder take the wrong path a b d d … d d c e.
This path contains only 6 ones, no matter how many times it stays in the loop at node d.
The result is an unbounded number of erroneous decoded data bits.
[Figure: encoder and state diagram of the catastrophic code, with states a = 00, b = 10, c = 01, d = 11, e = 00; the self-loop at node d has output 00, i.e. zero weight.]
Performance bounds for Conv. codes
The error performance of Conv. codes is analyzed in terms of the average bit error probability (not the average codeword error probability), because:
Codewords have variable sizes, depending on the size of the input.
For large blocks, the codeword error probability may converge to one, while the bit error probability may remain constant.
Performance bounds …
Analysis is based on:
Assuming the all-zero codeword is transmitted.
Evaluating the probability of an "error event" (usually using bounds such as the union bound).
An "error event" occurs at a time instant in the trellis if a non-zero path leaves the all-zero path and remerges with it at a later time.
Performance bounds …
Bounds on bit error probability for memoryless channels:
Hard-decision decoding:

P_B \le \frac{\partial T(D, L, N)}{\partial N} \bigg|_{N=1,\, L=1,\, D=2\sqrt{p(1-p)}}

where p is the crossover probability of the binary symmetric channel.
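For the example code, the bound can be evaluated symbolically (an added check; sympy is assumed available):

```python
import sympy as sp

D, L, N, p = sp.symbols("D L N p", positive=True)

T = D**5 * L**3 * N / (1 - D * L * (1 + L) * N)
bound = sp.simplify(sp.diff(T, N).subs({N: 1, L: 1}))
print(bound)                     # simplifies to D**5 / (1 - 2*D)**2
PB = bound.subs(D, 2 * sp.sqrt(p * (1 - p)))
print(PB.subs(p, sp.Rational(1, 100)).evalf())   # numeric bound at p = 0.01
```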
Performance bounds …
The error-correction capability of convolutional codes, given by t = \lfloor (d_f - 1)/2 \rfloor, depends on:
Whether the decoding is performed over a long enough span (within 3 to 5 times the constraint length)
How the errors are distributed (bursty or random)
For a given code rate, increasing the constraint length usually increases the free distance.
For a given constraint length, decreasing the coding rate usually increases the free distance.
The coding gain is upper bounded:

\text{coding gain} \le 10 \log_{10}(R_c d_f)
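As a quick added check with the example code of this lecture (R_c = 1/2, d_f = 5):

\text{coding gain} \le 10 \log_{10}(R_c\, d_f) = 10 \log_{10}(0.5 \times 5) \approx 4.0\ \text{dB}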
Performance bounds …
Basic coding gain (dB) for soft-decision Viterbi decoding:

Uncoded             Code rate 1/3      Code rate 1/2
Eb/N0 (dB)   PB     K=7     K=8        K=6     K=7
6.8          10^-3  4.2     4.4        3.5     3.8
9.6          10^-5  5.7     5.9        4.6     5.1
11.3         10^-7  6.2     6.5        5.3     5.8
Upper bound         7.0     7.3        6.0     7.0
Interleaving
Convolutional codes are suitable for memoryless
channels with random error events.
Interleaving …
Interleaving is done by spreading the coded symbols in time before transmission.
The reverse is done at the receiver by deinterleaving the received sequence.
Interleaving makes bursty errors look random, so Conv. codes can be used on bursty channels as well.
Types of interleaving:
Block interleaving
Convolutional or cross interleaving
Interleaving …
Consider a code with t = 1 (one correctable error per codeword) and 3 coded bits per codeword.
A burst error of length 3 cannot be corrected:
A1 A2 A3 B1 B2 B3 C1 C2 C3   (the burst puts 2 errors into one codeword)
Interleaver → A1 B1 C1 A2 B2 C2 A3 B3 C3 → Deinterleaver → A1 A2 A3 B1 B2 B3 C1 C2 C3
After deinterleaving, each codeword contains at most 1 error, which is correctable (see the sketch below).
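A minimal block-interleaver sketch in Python (added illustration; depth 3 to match the example above):

```python
# Block interleaver: write symbols row by row into a buffer of the
# given depth, read them out column by column (and invert at the
# receiver), so a burst of length <= depth hits each codeword once.

def interleave(symbols, depth):
    rows = [symbols[i:i + depth] for i in range(0, len(symbols), depth)]
    return [row[c] for c in range(depth) for row in rows]

def deinterleave(symbols, depth):
    width = len(symbols) // depth
    return [symbols[c * width + r] for r in range(width) for c in range(depth)]

data = ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3"]
tx = interleave(data, 3)    # ['A1','B1','C1','A2','B2','C2','A3','B3','C3']
print(deinterleave(tx, 3))  # original order restored
```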
Concatenated codes
A concatenated code uses two levels of coding: an inner code and an outer code (of higher rate).
Popular concatenated codes: Convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.
The purpose is to reduce the overall complexity while achieving the required error performance.
[Figure: receiver chain, Demodulator → Inner decoder → Deinterleaver → Outer decoder → Output data]
Practical example: Compact disc
Compact disc – cont’d
[Figure: CIRC encoder and decoder chains.
Encoder: interleave → C2 encode → D* interleave → C1 encode → D interleave.
Decoder: D deinterleave → C1 decode → D* deinterleave → C2 decode → deinterleave.]