
Digital Communications I:

Modulation and Coding Course


Last time, we talked about:
 How is decoding performed for convolutional codes?
 What is a maximum likelihood decoder?
 What are soft decisions and hard decisions?
 How does the Viterbi algorithm work?

Lecture 12 2
Trellis of an example ½ Conv. code

Input bits (including tail bits): 1 0 1 0 0
Output bits:                      11 10 00 10 11

[Trellis diagram over time instants t1 ... t6: each branch is labeled "input/output bits", e.g. 0/00, 1/11, 0/10, 1/01, 0/11, 1/00, 0/01, 1/10.]
Lecture 12 3
Block diagram of the DCS

[Block diagram: Information source → Rate 1/n Conv. encoder → Modulator → Channel → Demodulator → Rate 1/n Conv. decoder → Information sink]

Input sequence:     m = (m1, m2, ..., mi, ...)
Codeword sequence:  U = G(m) = (U1, U2, U3, ..., Ui, ...)
                    Ui = (u1i, ..., uji, ..., uni)  — the i-th branch word (n coded bits)
Received sequence:  Z = (Z1, Z2, Z3, ..., Zi, ...)
                    Zi = (z1i, ..., zji, ..., zni)  — the n demodulator outputs for branch word i
Decoded sequence:   m̂ = (m̂1, m̂2, ..., m̂i, ...)
Lecture 12 4
Soft and hard decision decoding
 In hard decision:
 The demodulator makes a firm (hard) decision on whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable that decision is.

 In soft decision:
 The demodulator provides the decoder with some side information together with the decision. The side information gives the decoder a measure of confidence in the decision (a small sketch follows below).
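
A minimal sketch (illustrative assumptions: BPSK over AWGN, unit symbol energy, hypothetical helper names) contrasting what hard and soft demodulator outputs look like for the same received samples:

```python
# A minimal sketch (illustrative assumptions: BPSK over AWGN, unit symbol energy,
# hypothetical helper names) contrasting hard and soft demodulator outputs.
import random

def bpsk_tx(bits, ec=1.0):
    """Map bit 1 -> +sqrt(Ec), bit 0 -> -sqrt(Ec)."""
    return [(2 * b - 1) * ec ** 0.5 for b in bits]

def awgn(samples, n0=0.5):
    """Add Gaussian noise with variance N0/2 per sample."""
    return [s + random.gauss(0.0, (n0 / 2) ** 0.5) for s in samples]

bits = [1, 0, 1, 0, 0]
r = awgn(bpsk_tx(bits))

hard = [1 if x > 0 else 0 for x in r]     # firm 0/1 decision, reliability discarded
soft = [round(x, 2) for x in r]           # the (possibly quantized) value itself is
                                          # passed on as a confidence measure
print(hard, soft)
```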

Lecture 12 5
Soft and hard decision decoding …
 ML soft-decision decoding rule:
 Choose the path in the trellis with minimum
Euclidean distance from the received
sequence

 ML hard-decision decoding rule:


 Choose the path in the trellis with minimum
Hamming distance from the received
sequence

Lecture 12 6
The Viterbi algorithm

 The Viterbi algorithm performs maximum likelihood decoding.
 It finds the path through the trellis with the largest metric (maximum correlation or minimum distance).
 At each step in the trellis, it compares the partial metrics of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric (a decoder sketch follows below).
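
As a concrete illustration (a sketch, not the lecture's own implementation, with assumed helper names), here is a minimal hard-decision Viterbi decoder in Python for the rate-1/2, K = 3 code used in the lecture's examples (generator polynomials 7 and 5 in octal); the survivor at each state keeps the smallest accumulated Hamming distance:

```python
# A minimal sketch (assumption): hard-decision Viterbi decoder for the rate-1/2,
# K = 3 convolutional code with generators (7, 5) octal, i.e. g1 = 1+X+X^2, g2 = 1+X^2.
# Branch metric: Hamming distance; the survivor keeps the smallest accumulated metric.

G = [0b111, 0b101]              # generator polynomials (7, 5) in octal
K = 3                           # constraint length -> 2**(K-1) = 4 trellis states

def branch(state, bit):
    """Next state and output bits for one input bit from a given state."""
    reg = (bit << (K - 1)) | state                   # shift the new bit into the register
    out = [bin(reg & g).count('1') % 2 for g in G]   # modulo-2 sums selected by g1, g2
    return reg >> 1, out                             # drop the oldest bit

def viterbi_decode(received):
    """received: flat list of hard bits, two per trellis branch."""
    n_states = 1 << (K - 1)
    INF = float('inf')
    metric = [0.0] + [INF] * (n_states - 1)          # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = branch(s, bit)
                m = metric[s] + sum(a != b for a, b in zip(out, r))
                if m < new_metric[ns]:               # keep only the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[0]                                  # tail bits force the all-zero end state

# The slides' hard-decision example: Z = 11 10 11 10 01 decodes to 1 0 0 0 0,
# i.e. message bits m_hat = (1 0 0) followed by the two tail bits.
print(viterbi_decode([1,1, 1,0, 1,1, 1,0, 0,1]))
```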

Lecture 12 7
Example of hard-decision Viterbi decoding
ˆ  (100)
m

Z(
1110
1110
01
) ˆ
U (11 1011
0011
)
m (101
)
U( 111000
1011
)
0 2 2
1 3
2 0
1 1
1 2

1 0
0
0 3 2
0 1 1
1 2 Partial metric
0 0 3
0 2 S(ti ),ti 
1
2 2
2 1 3
Branch metric
1
t1 t2 t3 t4 t5 t6

Lecture 12 8
Example of soft-decision Viterbi decoding

2 2 222  2 ˆ  (101
m )

Z(
1
,,, , , 1 
,,1, ,
1)
3 3 333 3 ˆ
U ( 111000
1011
)
m (101
)
U( 1110 00
1011
)
0 -5/3 -5/3
0 -5/3
-1/3 10/3
1/3 1/3
-1/3 14/3

0 1/3 1/3
5/3 5/3 5/3 8/3
1/3
-5/3 -1/3
4/3 1/3 Partial metric
3 2
5/3 13/3 S(ti ),ti 

-4/3 5/3
-5/3 Branch metric
1/3 5/3 10/3
-5/3
t1 t2 t3 t4 t5 t6

Lecture 12 9
Today, we are going to talk about:
 The properties of Convolutional codes:
 Free distance
 Transfer function
 Systematic Conv. codes
 Catastrophic Conv. codes
 Error performance
 Interleaving
 Concatenated codes
 Error correction scheme in Compact disc

Lecture 12 10
Free distance of Convolutional codes
 Distance properties:
 Since a convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords:
 Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
 This is the minimum distance in the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
 It is called the minimum free distance, or the free distance of the code, denoted by dfree or df (a small search sketch follows after this list).
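
A sketch (not from the slides): the free distance can also be found by a shortest-path search over the state diagram, using the branch output weight as the edge length and looking for the lightest path that diverges from and remerges with the all-zero state; shown here for the lecture's rate-1/2 code with generators (7, 5) octal:

```python
# A sketch (not from the slides): d_free by shortest-path search over the state
# diagram, with branch output Hamming weight as the edge length.
import heapq

G, K = [0b111, 0b101], 3                  # generator polynomials, constraint length

def branch(state, bit):
    reg = (bit << (K - 1)) | state
    weight = sum(bin(reg & g).count('1') % 2 for g in G)   # output Hamming weight
    return reg >> 1, weight

def free_distance():
    start, w0 = branch(0, 1)              # diverge from the all-zero state with input 1
    dist, heap = {start: w0}, [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == 0:
            return d                       # first remerge with state 0 is the lightest
        if d > dist.get(s, float('inf')):
            continue
        for bit in (0, 1):
            ns, w = branch(s, bit)
            if d + w < dist.get(ns, float('inf')):
                dist[ns] = d + w
                heapq.heappush(heap, (d + w, ns))

print(free_distance())                     # prints 5 for this code
```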

Lecture 12 11
Free distance …

[Trellis diagram over t1 ... t6: the all-zero path (all branch Hamming weights 0) and the path of minimum weight that diverges from it and remerges with it; the branch weights of this path (2, 1, 2) add up to df = 5.]

Lecture 12 12
Transfer function of Convolutional codes

 Transfer function:
 The transfer function (generating function) is a tool that provides information about the weight distribution of the codewords.
 The weight distribution specifies the weights of the different paths in the trellis (codewords), together with their lengths and the weights of the corresponding information bits.

   T(D, L, N) = Σ_{i ≥ df, j ≥ K, l ≥ 1} D^i L^j N^l

   D, L, N : placeholders (dummy variables)
   i : distance of the path from the all-zero path
   j : number of branches that the path takes until it remerges with the all-zero path
   l : weight of the information bits corresponding to the path

Lecture 12 13
Transfer function …

 Example of the transfer function for the rate ½ convolutional code:
 1. Redraw the state diagram such that the zero state is split into two nodes, the starting node and the ending node.
 2. Label each branch by the corresponding D^i L^j N^l.

[Split state diagram with nodes a = 00 (start), b = 10, c = 01, d = 11 and e = 00 (end); branch labels: a→b: D^2LN, b→c: DL, b→d: DLN, c→b: LN, c→e: D^2L, d→c: DL, d→d: DLN.]
Lecture 12 14
Transfer function …

 Write the state equations (Xa, ..., Xe are dummy variables):

   Xb = D^2LN·Xa + LN·Xc
   Xc = DL·Xb + DL·Xd
   Xd = DLN·Xb + DLN·Xd
   Xe = D^2L·Xc

 Solve for T(D, L, N) = Xe / Xa:

   T(D, L, N) = D^5 L^3 N / (1 − DL(1 + L)N) = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + ...

One path with weight 5, length 3 and data weight 1
One path with weight 6, length 4 and data weight 2
One path with weight 6, length 5 and data weight 2
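
As a sketch (assuming SymPy is available), the same state equations can be solved symbolically to reproduce T(D, L, N); the variable names follow the slide:

```python
# A sketch (assumption: SymPy is available) that reproduces T(D, L, N) for the
# rate-1/2, K = 3 code by solving the state equations above symbolically.
import sympy as sp

D, L, N, Xa, Xb, Xc, Xd, Xe = sp.symbols('D L N Xa Xb Xc Xd Xe')

eqs = [
    sp.Eq(Xb, D**2 * L * N * Xa + L * N * Xc),   # branches entering state b = 10
    sp.Eq(Xc, D * L * Xb + D * L * Xd),          # branches entering state c = 01
    sp.Eq(Xd, D * L * N * Xb + D * L * N * Xd),  # branches entering state d = 11
    sp.Eq(Xe, D**2 * L * Xc),                    # branch entering the end node e = 00
]

sol = sp.solve(eqs, [Xb, Xc, Xd, Xe], dict=True)[0]
T = sp.simplify(sol[Xe] / Xa)                    # T(D, L, N) = Xe / Xa
print(T)                                         # D**5*L**3*N/(1 - D*L*N*(1 + L)), up to rearrangement

# Setting L = N = 1 and expanding in D enumerates the codeword weights:
print(sp.series(T.subs({L: 1, N: 1}), D, 0, 8))  # D**5 + 2*D**6 + 4*D**7 + O(D**8)
```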

Lecture 12 15
Systematic Convolutional codes
 A convolutional encoder of rate k/n is systematic if the k input bits appear, unchanged, as part of the n-bit branch word (a small encoder sketch follows below).

[Encoder diagram: the input bit is fed directly to the output as one of the coded bits, alongside the parity bit(s).]

 Systematic codes in general have a smaller free distance than non-systematic codes.
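
A minimal sketch (assumption: a rate-1/2 systematic code with parity polynomial 1 + X + X^2) showing that the first bit of every branch word is the input bit itself:

```python
# A minimal sketch (assumption: a rate-1/2 systematic code with parity polynomial
# 1 + X + X^2): the first bit of every branch word is the input bit itself.
def systematic_encode(bits):
    s1 = s2 = 0                          # two-stage shift register
    out = []
    for u in bits:
        parity = u ^ s1 ^ s2             # parity from 1 + X + X^2
        out += [u, parity]               # branch word = (input bit, parity bit)
        s1, s2 = u, s1
    return out

print(systematic_encode([1, 0, 1, 0, 0]))   # [1,1, 0,1, 1,0, 0,1, 0,1]
```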

Lecture 12 16
Catastrophic Convolutional codes
 Catastrophic error propagation in a convolutional code:
 A finite number of errors in the coded bits causes an infinite number of errors in the decoded data bits.
 A convolutional code is catastrophic if there is a closed loop in the state diagram with zero output weight.
 Systematic codes are not catastrophic:
 At least one bit of each output branch word is generated directly by the input bits.
 Only a small fraction of non-systematic codes are catastrophic (a simple test is sketched below).
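
A sketch of a standard test not shown in the slides (the Massey-Sain condition): a rate-1/2 code is non-catastrophic if and only if the GCD of its generator polynomials over GF(2) is a pure power of X:

```python
# A sketch of the Massey-Sain test for a rate-1/2 code: non-catastrophic iff the
# GCD of the generator polynomials over GF(2) is a pure power of X.
# Polynomials are stored as integers, bit i = coefficient of X^i.

def gf2_mod(a, b):
    """Remainder of a(X) divided by b(X) over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_catastrophic(g1, g2):
    g = gf2_gcd(g1, g2)
    return (g & (g - 1)) != 0            # more than one term -> not a power of X

print(is_catastrophic(0b111, 0b101))     # False: the lecture's (7, 5) code is fine
print(is_catastrophic(0b011, 0b101))     # True: 1+X divides 1+X^2 = (1+X)^2 over GF(2)
```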

Lecture 12 17
Catastrophic Conv. …
 Example of a catastrophic convolutional code:
 Assume the all-zero codeword is transmitted.
 Three errors occur in the coded bits such that the decoder takes the wrong path a b d d … d d c e.
 This path contains only 6 ones, no matter how many times it stays in the self-loop at node d.
 The result is a large number of erroneous decoded data bits.

[Encoder and state diagram of the catastrophic example: nodes a = 00, b = 10, c = 01, d = 11 and end node e = 00, with branch output labels 11, 10, 01, 00; the self-loop at node d has output 00, i.e. zero weight.]
Lecture 12 18
Performance bounds for Conv. codes
 The error performance of convolutional codes is analyzed in terms of the average bit error probability (not the average codeword error probability), because:
 Codewords have variable sizes, due to the variable size of the input.
 For large blocks, the codeword error probability may converge to one while the bit error probability may remain constant.
 ….

Lecture 12 19
Performance bounds …
 Analysis is based on:
 Assuming the all-zero codeword is transmitted
 Evaluating the probability of an "error event" (usually using bounds such as the union bound).
 An "error event" occurs at a time instant in the trellis if a non-zero path leaves the all-zero path and remerges with it at a later time.

Lecture 12 20
Performance bounds …
 Bounds on bit error probability for
memoryless channels:
 Hard-decision decoding:

   P_B ≤ dT(D, L, N)/dN   evaluated at   N = 1, L = 1, D = 2√(p(1 − p))

   (p is the channel transition, or crossover, probability)

 Soft-decision decoding on AWGN channels using BPSK:

   P_B ≤ Q(√(2 df Ec/N0)) · exp(df Ec/N0) · dT(D, L, N)/dN   evaluated at   N = 1, L = 1, D = exp(−Ec/N0)

   (Ec is the energy per coded bit)
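
A small numerical sketch (not from the slides): for the rate-1/2, K = 3 example code, dT(D, L, N)/dN evaluated at N = 1, L = 1 equals D^5/(1 − 2D)^2, so the hard-decision bound can be computed directly from the crossover probability p:

```python
# A small numerical sketch (not from the slides): hard-decision bound for the
# example code, using dT/dN |_(N=1, L=1) = D**5 / (1 - 2*D)**2.
import math

def pb_bound_hard(p):
    """Union bound on P_B for hard-decision Viterbi decoding over a BSC."""
    D = 2 * math.sqrt(p * (1 - p))
    return D**5 / (1 - 2 * D)**2      # only meaningful while 2*D < 1 (small enough p)

for p in (1e-2, 1e-3, 1e-4):
    print(f"p = {p:g}:  P_B <= {pb_bound_hard(p):.3e}")
```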

Lecture 12 21
Performance bounds …
 The error correction capability of convolutional codes, given by t = ⌊(df − 1)/2⌋, depends on:
 Whether the decoding is performed over a long enough span (a decoding depth of about 3 to 5 times the constraint length).
 How the errors are distributed (bursty or random).
 For a given code rate, increasing the constraint length usually increases the free distance.
 For a given constraint length, decreasing the code rate usually increases the free distance.
 The coding gain is upper bounded:

   coding gain ≤ 10 log10(Rc · df)  dB

 For example, the rate-1/2, K = 7 code with df = 10 gives at most 10·log10(5) ≈ 7 dB, matching the "upper bound" row of the table on the next slide.
Lecture 12 22
Performance bounds …
 Basic coding gain (dB) for soft-decision
Viterbi decoding
  Uncoded Eb/N0 (dB)   PB      Rate 1/3, K=7   Rate 1/3, K=8   Rate 1/2, K=6   Rate 1/2, K=7
  6.8                  10^-3   4.2             4.4             3.5             3.8
  9.6                  10^-5   5.7             5.9             4.6             5.1
  11.3                 10^-7   6.2             6.5             5.3             5.8
  Upper bound                  7.0             7.3             6.0             7.0

Lecture 12 23
Interleaving
 Convolutional codes are suitable for memoryless channels with random error events.

 Some errors are bursty in nature:
 There is statistical dependence among successive error events (time correlation) due to the channel memory.
 Examples: errors in multipath fading channels in wireless communications, errors due to switching noise, …

 "Interleaving" makes the channel look memoryless at the decoder.

Lecture 12 24
Interleaving …
 Interleaving is done by spreading the coded symbols in time (interleaving) before transmission.
 The reverse is done at the receiver by deinterleaving the received sequence.
 "Interleaving" makes bursty errors look random, so convolutional codes can be used.
 Types of interleaving:
 Block interleaving
 Convolutional or cross interleaving

Lecture 12 25
Interleaving …
 Consider a code with error-correction capability t = 1 and codewords of 3 coded bits, and a burst error of length 3; the sketch after this example shows the same idea in code.

 Without interleaving:

   A1 A2 A3 B1 B2 B3 C1 C2 C3   — the burst puts 2 errors in one codeword, which cannot be corrected.

 Using a 3×3 block interleaver:

   A1 A2 A3 B1 B2 B3 C1 C2 C3  → Interleaver →  A1 B1 C1 A2 B2 C2 A3 B3 C3  (transmitted)
   A1 B1 C1 A2 B2 C2 A3 B3 C3  → Deinterleaver →  A1 A2 A3 B1 B2 B3 C1 C2 C3  — the same burst now leaves only 1 error per codeword, which the code can correct.
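
A minimal Python sketch of the 3×3 block interleaver above (the helper names are illustrative assumptions): write the coded symbols row by row, read them out column by column.

```python
# A minimal sketch of a 3x3 block interleaver: write row by row, read column by column.
def block_interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    return block_interleave(symbols, cols, rows)   # transposing twice restores the order

coded = ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
tx = block_interleave(coded, 3, 3)
print(tx)                                # ['A1','B1','C1','A2','B2','C2','A3','B3','C3']

rx = list(tx)
for i in (3, 4, 5):                      # a burst of 3 errors hits A2, B2, C2 in a row
    rx[i] += '*'
print(block_deinterleave(rx, 3, 3))      # after deinterleaving, one error per codeword
```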

Lecture 12 26
Concatenated codes
 A concatenated code uses two levels of coding: an inner code and an outer code (of higher rate).
 A popular combination: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.
 The purpose is to reduce the overall complexity while still achieving the required error performance.

[Block diagram:
 Input data → Outer encoder → Interleaver → Inner encoder → Modulator → Channel
 Channel → Demodulator → Inner decoder → Deinterleaver → Outer decoder → Output data]

Lecture 12 27
Practical example: Compact disc

 “Without error correcting codes, digital audio would not be technically feasible.”

 The channel in a CD playback system consists of a transmitting laser, a recorded disc and a photo-detector.
 The sources of errors are manufacturing defects, fingerprints and scratches.
 Errors have a bursty nature.
 Error correction and concealment is done by a concatenated error control scheme called the Cross-Interleave Reed-Solomon Code (CIRC).

Lecture 12 28
Compact disc – cont’d

 CIRC encoder and decoder:


 Encoder:  Δ interleave → C2 encode → D* interleave → C1 encode → D interleave

 Decoder:  D deinterleave → C1 decode → D* deinterleave → C2 decode → Δ deinterleave

Lecture 12 29
