Principles of Communications: Convolutional Codes

The document discusses convolutional codes, which are a type of error correcting code used in communication systems. Convolutional codes operate on a serial data stream rather than blocks, using a shift register to map input bits to output codewords. The key concepts covered include the generator polynomials that describe each code path, encoding an input sequence using the state diagram, and the trellis diagram representation of the encoding process.

Uploaded by

Bui Trung Hieu
Copyright
© Attribution Non-Commercial (BY-NC)

Ho Chi Minh University of Natural Sciences

Faculty of Electronics and Telecommunications

Principles of Communications
Convolutional codes

By: Dang Quang Vinh

09/2008
Introduction
- In block coding, the encoder accepts a k-bit message block and generates an n-bit codeword: a block-by-block basis
- The encoder must buffer an entire message block before generating the codeword
- When the message bits come in serially rather than in large blocks, using a buffer is undesirable
- Convolutional coding suits this case
Definitions
- A convolutional encoder: a finite-state machine that consists of an M-stage shift register and n modulo-2 adders
- An L-bit message sequence produces an output sequence with n(L+M) bits
- Code rate:

r = L / (n(L+M))  (bits/symbol)

- L >> M, so

r ≈ 1/n  (bits/symbol)
Definitions
- Constraint length (K): the number of shifts over which a single message bit influences the output
- An M-stage shift register needs M+1 shifts for a message bit to enter the shift register and come out
- K = M + 1
Example
- Convolutional code (2,1,2)
- n=2: 2 modulo-2 adders, i.e. 2 outputs
- k=1: 1 input
- M=2: 2 stages of shift register (K = M+1 = 2+1 = 3)

[Encoder diagram: the input feeds a 2-stage shift register; path 1 and path 2 are the two modulo-2 adder outputs]
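The encoder above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the tap tuples (1,1,1) and (1,0,1) are the two paths' impulse responses introduced later in the deck, with the first element multiplying the current input bit:

```python
def encode(msg, taps=((1, 1, 1), (1, 0, 1))):
    """Encode with a rate-1/2, M=2 convolutional encoder.

    Each tap tuple weights (current input, stage 1, stage 2):
    path 1 = u + s1 + s2, path 2 = u + s2 (modulo 2).
    Two flush zeros are appended to return the register to state 00.
    """
    s1 = s2 = 0                      # shift-register contents
    out = []
    for u in list(msg) + [0, 0]:     # message + M flush bits
        window = (u, s1, s2)
        out.append(tuple(sum(t * w for t, w in zip(path, window)) % 2
                         for path in taps))
        s1, s2 = u, s1               # shift the register
    return out

print(encode([1, 1, 0, 0, 1]))
# each pair is (path-1 bit, path-2 bit)
```

For message 11001 this reproduces the encoded sequence (11,01,01,11,11,10,11) derived in the worked example later in the deck.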
Example
- Convolutional code (3,2,1)
- n=3: 3 modulo-2 adders or 3 outputs
- k=2: 2 inputs
- M=1: 1 stage in each shift register (K=2 each)

[Encoder diagram: two inputs, each with a 1-stage shift register, feeding 3 modulo-2 adder outputs]
Generator polynomials
- A convolutional code is a nonsystematic code
- Each path connecting the output to the input can be characterized by an impulse response or a generator polynomial
- (g_M^(i), ..., g_2^(i), g_1^(i), g_0^(i)) denotes the impulse response of the ith path
- Generator polynomial of the ith path:

g^(i)(D) = g_M^(i) D^M + ... + g_2^(i) D^2 + g_1^(i) D + g_0^(i)

- D denotes the unit-delay variable: different from the X of cyclic codes
- A complete convolutional code is described by a set of polynomials {g^(1)(D), g^(2)(D), ..., g^(n)(D)}
Example(1/8)
- Consider the case of (2,1,2)
- Impulse response of path 1 is (1,1,1)
- The corresponding generator polynomial is g^(1)(D) = D^2 + D + 1
- Impulse response of path 2 is (1,0,1)
- The corresponding generator polynomial is g^(2)(D) = D^2 + 1
- Message sequence (11001)
- Polynomial representation: m(D) = D^4 + D^3 + 1
Example(2/8)
- Output polynomial of path 1:

c^(1)(D) = m(D) g^(1)(D)
         = (D^4 + D^3 + 1)(D^2 + D + 1)
         = D^6 + D^5 + D^4 + D^5 + D^4 + D^3 + D^2 + D + 1
         = D^6 + D^3 + D^2 + D + 1

- Output sequence of path 1: (1001111)
- Output polynomial of path 2:

c^(2)(D) = m(D) g^(2)(D)
         = (D^4 + D^3 + 1)(D^2 + 1)
         = D^6 + D^5 + D^4 + D^3 + D^2 + 1

- Output sequence of path 2: (1111101)
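The polynomial products above can be checked mechanically. A small sketch (illustrative, not from the slides) that multiplies coefficient lists over GF(2), written highest power first as the slides write sequences:

```python
def polymul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists,
    highest-degree coefficient first, as the slides write sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y      # addition over GF(2) is XOR
    return out

m  = [1, 1, 0, 0, 1]             # m(D)  = D^4 + D^3 + 1
c1 = polymul_gf2(m, [1, 1, 1])   # g1(D) = D^2 + D + 1
c2 = polymul_gf2(m, [1, 0, 1])   # g2(D) = D^2 + 1

print(c1)   # [1, 0, 0, 1, 1, 1, 1]  -> (1001111)
print(c2)   # [1, 1, 1, 1, 1, 0, 1]  -> (1111101)
print([f"{a}{b}" for a, b in zip(c1, c2)])   # interleaved codeword
```

The last line multiplexes the two path outputs into the pairs (11,01,01,11,11,10,11).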
Example(3/8)
- m = (11001)
- c^(1) = (1001111)
- c^(2) = (1111101)
- Encoded sequence c = (11,01,01,11,11,10,11)
- Message length L = 5 bits
- Output length n(L+K-1) = 14 bits
- A terminating sequence of K-1 = 2 zeros is appended to the last input bit so that the shift register is restored to its all-zero initial state
Example(4/8)
- Another way to calculate the output: convolve the message with the path's impulse response
- Path 1, impulse response (1,1,1): c_n = m_n + m_(n-1) + m_(n-2) (mod 2), with the message padded by zeros

m (padded) : 1 1 0 0 1 0 0
c^(1)      : 1 0 0 1 1 1 1

- c^(1) = (1001111)
Example(5/8)
- Path 2, impulse response (1,0,1): c_n = m_n + m_(n-2) (mod 2)

m (padded) : 1 1 0 0 1 0 0
c^(2)      : 1 1 1 1 1 0 1

- c^(2) = (1111101)
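The sliding-window tables above are ordinary discrete convolution modulo 2. A short sketch (illustrative; the function name is mine), with both sequences in time order, first bit first:

```python
def conv_mod2(m, h):
    """c[n] = sum_k h[k] * m[n-k] (mod 2): convolve a message with a
    path's impulse response, treating out-of-range message bits as 0."""
    c = []
    for n in range(len(m) + len(h) - 1):
        bit = 0
        for k, hk in enumerate(h):
            if 0 <= n - k < len(m):
                bit ^= hk & m[n - k]
        c.append(bit)
    return c

m = [1, 1, 0, 0, 1]
print(conv_mod2(m, [1, 1, 1]))   # path 1 -> (1001111)
print(conv_mod2(m, [1, 0, 1]))   # path 2 -> (1111101)
```

This matches the polynomial-multiplication result, as it must: polynomial multiplication over GF(2) and discrete convolution mod 2 are the same operation.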
Example(6/8)
- Consider the case of (3,2,1)

[Encoder diagram: two inputs, each with a 1-stage shift register, feeding 3 modulo-2 adder outputs]

- g_i^(j) = (g_(i,M)^(j), g_(i,M-1)^(j), ..., g_(i,1)^(j), g_(i,0)^(j)) denotes the impulse response of the jth path corresponding to the ith input
Example(7/8)

[Encoder diagram as on the previous slide]

g_1^(1) = (11) → g_1^(1)(D) = D + 1
g_2^(1) = (01) → g_2^(1)(D) = 1
g_1^(2) = (01) → g_1^(2)(D) = 1
g_2^(2) = (10) → g_2^(2)(D) = D
g_1^(3) = (11) → g_1^(3)(D) = D + 1
g_2^(3) = (10) → g_2^(3)(D) = D
Example(8/8)
- Assume that:
- m^(1) = (101) → m^(1)(D) = D^2 + 1
- m^(2) = (011) → m^(2)(D) = D + 1
- Outputs are:
- c^(1) = m^(1) g_1^(1) + m^(2) g_2^(1)
  = (D^2 + 1)(D + 1) + (D + 1)(1)
  = D^3 + D^2 + D + 1 + D + 1 = D^3 + D^2 → c^(1) = (1100)
- c^(2) = m^(1) g_1^(2) + m^(2) g_2^(2)
  = (D^2 + 1)(1) + (D + 1)(D)
  = D^2 + 1 + D^2 + D = D + 1 → c^(2) = (0011)
- c^(3) = m^(1) g_1^(3) + m^(2) g_2^(3)
  = (D^2 + 1)(D + 1) + (D + 1)(D)
  = D^3 + D^2 + D + 1 + D^2 + D = D^3 + 1 → c^(3) = (1001)
- Output c = (101,100,010,011)
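The k = 2 arithmetic above can be verified the same way. A sketch (illustrative, not from the slides; helper names are mine) using coefficient lists written highest power first, then multiplexing the three output streams:

```python
def pmul(a, b):
    """GF(2) polynomial product; coefficients highest power first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y
    return out

def padd(a, b):
    """GF(2) polynomial sum; operands left-padded to equal length."""
    n = max(len(a), len(b))
    a = [0] * (n - len(a)) + a
    b = [0] * (n - len(b)) + b
    return [x ^ y for x, y in zip(a, b)]

def pad(p, n):
    return [0] * (n - len(p)) + p

m1, m2 = [1, 0, 1], [1, 1]                             # D^2 + 1 and D + 1

c1 = pad(padd(pmul(m1, [1, 1]), pmul(m2, [1])),    4)  # g11 = D+1, g21 = 1
c2 = pad(padd(pmul(m1, [1]),    pmul(m2, [1, 0])), 4)  # g12 = 1,   g22 = D
c3 = pad(padd(pmul(m1, [1, 1]), pmul(m2, [1, 0])), 4)  # g13 = D+1, g23 = D

print(c1, c2, c3)  # [1,1,0,0] [0,0,1,1] [1,0,0,1]
print([f"{a}{b}{c}" for a, b, c in zip(c1, c2, c3)])
```

Each output stream is padded to n(L/k + M) / n = 4 symbols before multiplexing, giving the (101,100,010,011) sequence on the slide.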
State diagram
- Consider convolutional code (2,1,2)

State | Binary description
a     | 00
b     | 10
c     | 01
d     | 11

[State diagram: transitions a→a (0/00), a→b (1/11), b→c (0/10), b→d (1/01), c→a (0/11), c→b (1/00), d→c (0/01), d→d (1/10); each label is input/output]

- 4 possible states
- Each node has 2 incoming branches and 2 outgoing branches
- A transition from one state to another caused by input 0 is represented by a solid line; one caused by input 1 by a dashed line
- The output is labeled over the transition line
Example
- Message 11001
- Start at state a
- Walk through the state diagram in accordance with the message sequence

Input  |  1  1  0  0  1  0  0
State  | 00 10 11 01 00 10 01 00
       |  a  b  d  c  a  b  c  a
Output | 11 01 01 11 11 10 11
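The walk above can be automated with a transition table keyed by (state, input), read directly off the state diagram (a sketch, not code from the slides):

```python
# (state, input) -> (next state, output), taken from the state diagram
T = {
    ('a', 0): ('a', '00'), ('a', 1): ('b', '11'),
    ('b', 0): ('c', '10'), ('b', 1): ('d', '01'),
    ('c', 0): ('a', '11'), ('c', 1): ('b', '00'),
    ('d', 0): ('c', '01'), ('d', 1): ('d', '10'),
}

def walk(msg):
    """Trace the state sequence and outputs for a message (plus flush bits)."""
    state, states, outputs = 'a', ['a'], []
    for u in list(msg) + [0, 0]:          # two flush zeros end at state a
        state, out = T[(state, u)]
        states.append(state)
        outputs.append(out)
    return states, outputs

states, outputs = walk([1, 1, 0, 0, 1])
print(states)    # a b d c a b c a
print(outputs)   # 11 01 01 11 11 10 11
```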
Trellis(1/2)

[Trellis diagram for the (2,1,2) code: the four states a=00, b=10, c=01, d=11 are drawn as rows and the levels j = 0, 1, 2, ..., L-1, L, L+1, L+2 as columns; every branch is labeled input/output (0/00 along the top a→a branches, 1/11 for a→b, and so on, as in the state diagram)]
Trellis(2/2)
- The trellis contains L+K levels
- Labeled as j = 0, 1, ..., L, ..., L+K-1
- The first K-1 levels correspond to the encoder's departure from the initial state a
- The last K-1 levels correspond to the encoder's return to state a
- For every level j in the range K-1 ≤ j ≤ L, all the states are reachable
Example
- Message 11001

[Trellis diagram highlighting the path traced by input 1100100 (the message plus two flush zeros) through levels j = 0, 1, ..., 7]

Input  :  1  1  0  0  1  0  0
Output : 11 01 01 11 11 10 11
Maximum Likelihood Decoding of Convolutional codes
- m denotes a message vector
- c denotes the corresponding code vector
- r denotes the received vector
- Given r, the decoder is required to make an estimate m̂ of the message vector, or equivalently to produce an estimate ĉ of the code vector
- m̂ = m only if ĉ = c; otherwise a decoding error happens
- The decoding rule is said to be optimum when the probability of decoding error is minimized
- The maximum likelihood decoder or decision rule is described as follows:
  - Choose the estimate ĉ for which the log-likelihood function log p(r|c) is maximum


Maximum Likelihood Decoding of Convolutional codes
- Binary symmetric channel: both c and r are binary sequences of length N

p(r|c) = Π_{i=1..N} p(r_i|c_i)
log p(r|c) = Σ_{i=1..N} log p(r_i|c_i)

with p(r_i|c_i) = p if r_i ≠ c_i, and 1-p if r_i = c_i

- Suppose r differs from c in d positions, i.e. d is the Hamming distance between r and c; then

log p(r|c) = d log p + (N-d) log(1-p)
           = d log( p/(1-p) ) + N log(1-p)

- Since p < 1/2, log( p/(1-p) ) < 0: maximizing the log-likelihood means minimizing d
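The identity above can be checked numerically: over a BSC with p < 1/2, the codeword at the smaller Hamming distance from r always has the larger log-likelihood. A sketch with illustrative vectors (the function name and example values are mine):

```python
import math

def loglik(r, c, p):
    """log p(r|c) over a BSC: d*log(p) + (N-d)*log(1-p)."""
    d = sum(ri != ci for ri, ci in zip(r, c))   # Hamming distance
    return d * math.log(p) + (len(r) - d) * math.log(1 - p)

r  = [1, 1, 0, 0, 0, 1]
c1 = [1, 1, 0, 0, 0, 0]   # distance 1 from r
c2 = [1, 0, 0, 1, 0, 0]   # distance 3 from r
p  = 0.1

print(loglik(r, c1, p) > loglik(r, c2, p))  # True: smaller d wins
```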
Maximum Likelihood Decoding of Convolutional codes
- The decoding rule is restated as follows:
  - Choose the estimate ĉ that minimizes the Hamming distance between the received vector r and the transmitted vector c
- The received vector r is compared with each possible code vector c, and the one closest to r is chosen as the correct transmitted code vector
The Viterbi algorithm
- Choose the path in the trellis whose coded sequence differs from the received sequence in the fewest number of positions
The Viterbi algorithm
- The algorithm operates by computing a metric for every possible path in the trellis
- The metric is the Hamming distance between the coded sequence represented by that path and the received sequence
- For each node, two paths enter the node; the one with the lower metric survives. The other is discarded
- The computation is repeated for every level j in the range K-1 ≤ j ≤ L
- Number of survivors at each level: 2^(K-1) = 4
The Viterbi algorithm
- c = (11,01,01,11,11,10,11), r = (11,00,01,11,10,10,11)

[Trellis diagram of the Viterbi search: at each level the accumulated Hamming metric is written beside every state, and of the two paths entering a state only the lower-metric survivor is kept]

Code   : 11 01 01 11 11 10 11
Output :  1  1  0  0  1  0  0
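The survivor computation above can be sketched as follows (a minimal hard-decision Viterbi for the (2,1,2) code; function and variable names are mine, and the branch outputs are recomputed from the generator taps used earlier):

```python
def branch_output(u, state, taps=((1, 1, 1), (1, 0, 1))):
    """Output pair for input u leaving shift-register state (s1, s2)."""
    window = (u,) + state
    return tuple(sum(t * w for t, w in zip(path, window)) % 2
                 for path in taps)

def viterbi(received):
    """Hard-decision Viterbi decoding for the (2,1,2) code,
    assuming the encoder starts and ends in state (0, 0)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: 0 if s == (0, 0) else float('inf') for s in states}
    path = {s: [] for s in states}
    for r in received:
        new_metric = {s: float('inf') for s in states}
        new_path = {}
        for s in states:
            if metric[s] == float('inf'):
                continue
            for u in (0, 1):
                out = branch_output(u, s)
                ns = (u, s[0])                      # shift in the new bit
                d = metric[s] + sum(a != b for a, b in zip(out, r))
                if d < new_metric[ns]:              # keep the survivor
                    new_metric[ns], new_path[ns] = d, path[s] + [u]
        metric, path = new_metric, new_path
    return path[(0, 0)], metric[(0, 0)]

r = [(1, 1), (0, 0), (0, 1), (1, 1), (1, 0), (1, 0), (1, 1)]
bits, dist = viterbi(r)
print(bits, dist)   # the last two bits are the flush zeros
```

For the received sequence on this slide the survivor ending at state 00 has metric 2 and input sequence 1100100, i.e. message 11001 plus the two flush zeros, matching the trellis result.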
Free distance of a conv. code
- The performance of a conv. code depends on the decoding algorithm and the distance properties of the code
- The free distance, denoted d_free, is a measure of the code's ability to combat channel noise
- Free distance: the minimum Hamming distance between any two codewords in the code
- d_free > 2t, where t is the number of correctable errors
- Since a convolutional code doesn't use blocks, processing a continuous bit stream instead, the value of t applies to errors located relatively near to each other
Free distance of a conv. code
- A conv. code has the linear property
- So the free distance can also be defined as the minimum Hamming weight of any nonzero code sequence:

d_free = min { w(X) : X ≠ 000... }

- d_free can be calculated from a generating function
- The generating function is viewed as the transfer function of the encoder
  - The encoder relates input and output by convolution
  - The generating function relates the initial and final states by multiplication
- The free distance determines the decoding error probability
Free distance of a conv. code
- Modify the state diagram into a signal-flow graph: split state a into an input node a0 and an output node a1, and label every branch with D^x L^y

Branch labels: a0→b: D^2 L;  b→c: D;  b→d: DL;  c→b: L;  c→a1: D^2;  d→c: D;  d→d: DL

- Exponent of D: Hamming weight of the encoder output on that branch
- Exponent of L: number of nonzero message bits on that branch
Free distance of a conv. code
- State equations, where a0, b, c, d, a1 are the node signals of the graph:

b = D^2 L a0 + L c
c = D b + D d
d = DL b + DL d
a1 = D^2 c

- Solving the equation set for a1/a0 gives the generating function:

T(D,L) = a1/a0 = D^5 L / (1 - 2DL) = D^5 L Σ_{i=0..∞} (2DL)^i

T(D,L) = D^5 L + 2 D^6 L^2 + 4 D^7 L^3 + ... = Σ_{d=5..∞} 2^(d-5) D^d L^(d-4)
Free distance of a conv. code

T(D,L) = D^5 L + 2 D^6 L^2 + 4 D^7 L^3 + ... = Σ_{d=5..∞} 2^(d-5) D^d L^(d-4)

- T(D,L) represents all possible transmitted sequences that terminate with the c→a1 transition
- For any d ≥ 5, there are 2^(d-5) paths with weight w(X) = d that terminate with the c→a1 transition; those paths are generated by messages containing d-4 nonzero bits
- The free distance is the smallest such w(X), so d_free = 5
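d_free = 5 can also be confirmed by brute force: encode every short nonzero message (with flush bits) and take the minimum codeword weight. A sketch (illustrative; the search is truncated at message length 8, which suffices here because the single-1 message already achieves the minimum):

```python
from itertools import product

def encode(msg, taps=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2, M=2 encoder; appends two flush zeros, returns a flat bit list."""
    s1 = s2 = 0
    out = []
    for u in list(msg) + [0, 0]:
        window = (u, s1, s2)
        out.extend(sum(t * w for t, w in zip(path, window)) % 2
                   for path in taps)
        s1, s2 = u, s1
    return out

# minimum codeword weight over all nonzero messages of length 1..8
dfree = min(sum(encode(m))
            for n in range(1, 9)
            for m in product((0, 1), repeat=n) if any(m))
print(dfree)   # 5
```

The minimum is attained by the message (1), whose terminated codeword 11 10 11 has weight 5, the a→b→c→a path of the signal-flow graph.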
Systematic conv. code
- The message elements appear explicitly in the output sequence together with the redundant elements

[Encoder diagram: the input feeds a 2-stage shift register; path 1 carries the message bits, path 2 is a modulo-2 adder output]
Systematic conv. code
- Impulse response of path 1 is (1,0,0)
- The corresponding generator polynomial is g^(1)(D) = D^2
- Impulse response of path 2 is (1,0,1)
- The corresponding generator polynomial is g^(2)(D) = D^2 + 1
- Message sequence (11001)
Systematic conv. code
- Output sequence of path 1: (1100100)
- Output sequence of path 2: (1111101)
- m = (11001)
- c^(1) = (1100100)
- c^(2) = (1111101)
- Encoded sequence c = (11,11,01,01,11,00,01)
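Swapping path 1's taps for (1,0,0) in the earlier encoder sketch reproduces the systematic sequences above (illustrative; in this sketch the first tap multiplies the current input bit, so path 1 simply copies the message through):

```python
def encode(msg, taps):
    """Rate-1/2, M=2 encoder; appends two flush zeros."""
    s1 = s2 = 0
    out = []
    for u in list(msg) + [0, 0]:
        window = (u, s1, s2)
        out.append(tuple(sum(t * w for t, w in zip(p, window)) % 2
                         for p in taps))
        s1, s2 = u, s1
    return out

pairs = encode([1, 1, 0, 0, 1], taps=((1, 0, 0), (1, 0, 1)))
print([f"{a}{b}" for a, b in pairs])
# path-1 bits alone are the message followed by the flush zeros
```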
Systematic conv. code
- Another example of a systematic conv. code

[Encoder diagram: a second systematic encoder, with path 1 passing the message through and path 2 formed by a modulo-2 adder]
Systematic vs nonsystematic

T(D,L) = D^5 L + 2 D^6 L^2 + 4 D^7 L^3 + ... = Σ_{d=5..∞} 2^(d-5) D^d L^(d-4)

- Assumption: T(D,L) is convergent
- When T(D,L) is nonconvergent, a finite number of transmission errors can cause an infinite number of decoding errors
- Such a code is called a catastrophic code
- A systematic conv. code cannot be catastrophic
- But, for the same constraint length, the free distance of a systematic code is smaller than that of a nonsystematic code
- Table 10.8
Systematic vs nonsystematic
- Maximum free distance with systematic and nonsystematic conv. codes of rate 1/2

K   Systematic   Nonsystematic
2       3              3
3       4              5
4       4              6
5       5              7
6       6              8
7       6             10
8       7             10
