Error Correcting Codes
Timothy J. Schulz
Professor and Chair
Engineering Exploration
Fall, 2004
Digital Data
• ASCII Text
A 01000001
B 01000010
C 01000011
D 01000100
E 01000101
F 01000110
...
Digital Sampling
A sampled waveform is quantized to 3-bit levels (000 through 111) and transmitted as a bit stream, e.g.:
000010001000001011011011010000111110111111111
Digital Communication
Digital Channels
Binary Symmetric Channel
A 0 is received as a 0 (and a 1 as a 1) with probability 1-p; each bit is flipped with error probability p.
encode book
information bits channel bits
0 000
1 111
decode book
channel bits information bits
000 0
001 0
010 0
011 1
100 0
101 1
110 1
111 1
information bits 0 0 1 0 1
channel code 000 000 111 000 111
received bits 010 000 100 001 110
decoded bits 0 0 0 0 1
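The encode/decode book above is the (3,1) repetition code with majority-vote decoding. A minimal sketch (function names are mine, not from the slides):

```python
def encode(bits):
    # Repeat each information bit three times (the (3,1) repetition code).
    return [b for b in bits for _ in range(3)]

def decode(channel_bits):
    # Majority vote over each block of three received bits.
    out = []
    for i in range(0, len(channel_bits), 3):
        block = channel_bits[i:i + 3]
        out.append(1 if sum(block) >= 2 else 0)
    return out

# The received blocks from the example: 010 000 100 001 110
received = [0,1,0, 0,0,0, 1,0,0, 0,0,1, 1,1,0]
print(decode(received))  # [0, 0, 0, 0, 1], as in the table
```

Note that the third information bit is decoded incorrectly, exactly as in the table: two of its three channel bits were flipped.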
situation: no errors
probability: (1-p)(1-p)(1-p) = (1-p)^3 = 1 - 3p + 3p^2 - p^3
[Figure: decoded error probability vs. channel error probability p]
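The curve in that figure can be recomputed directly: a repetition block is decoded wrongly only when 2 or 3 of its bits flip, so the decoded error probability is 3p^2(1-p) + p^3 = 3p^2 - 2p^3. A small sketch:

```python
# Decoded error probability of the (3,1) repetition code on a BSC
# with crossover probability p: a block fails when 2 or 3 bits flip.
def p_error(p):
    return 3 * p**2 * (1 - p) + p**3   # = 3p^2 - 2p^3

for p in (0.01, 0.1, 0.3):
    print(p, p_error(p))   # always below p for p < 0.5
```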
A linear (n,k) code is defined by its k x n generator matrix
G = g_{0,0}   g_{0,1}   ... g_{0,n-1}
    ...
    g_{k-1,0} g_{k-1,1} ... g_{k-1,n-1}
• v = u·G
In systematic form the codeword consists of n-k check bits followed by k information bits.
For the (7,4) Hamming code (v1 v2 v3 = check bits, v4 v5 v6 v7 = information bits) the parity check matrix is
H = 1 0 0 1 0 1 1
    0 1 0 1 1 1 0
    0 0 1 0 1 1 1
Every codeword satisfies v·H^T = 0:
v1 + v4 + v6 + v7 = 0  =>  v1 = v4 + v6 + v7
v2 + v4 + v5 + v6 = 0  =>  v2 = v4 + v5 + v6
v3 + v5 + v6 + v7 = 0  =>  v3 = v5 + v6 + v7
The standard array lists all 2^n n-tuples: the first row is the code (v1 = 0, v2, ..., v_{2^k}); each later row is a coset headed by a coset leader:
v1 = 0       v2              ...  v_{2^k}
e2           e2 + v2         ...  e2 + v_{2^k}
e3           e3 + v2         ...  e3 + v_{2^k}
...
e_{2^{n-k}}  e_{2^{n-k}}+v2  ...  e_{2^{n-k}}+v_{2^k}
• TH 3.3
No two n-tuples in the same row are identical.
Every n-tuple appears in one and only one row.
• TH 3.4
Every (n,k) linear code is capable of correcting exactly 2^(n-k)
error patterns, including the all-zero error pattern.
• EX: The (7,4) Hamming code
# of correctable error patterns = 2^3 = 8
# of single-error patterns = 7
Therefore all single-error patterns, and only single-error
patterns, can be corrected. (Recall the Hamming bound, and
the fact that Hamming codes are perfect.)
• Can correct all single errors and one double error pattern
000000 110001 101010 011011 011100 101101 110110 000111
000001 110000 101011 011010 011101 101100 110111 000110
000010 110011 101000 011001 011110 101111 110100 000101
000100 110101 101110 011111 011000 101001 110010 000011
001000 111001 100010 010011 010100 100101 111110 001111
010000 100001 111010 001011 001100 111101 100110 010111
100000 010001 001010 111011 111100 001101 010110 100111
100100 010101 001110 111111 111000 001001 010010 100011
• TH 3.6
All the 2^k n-tuples of a coset have the same syndrome. The syndromes of
different cosets are different.
(1st part) Every element of the coset with leader e_l has the form e_l + v_i, and
(e_l + v_i)H^T = e_lH^T since v_iH^T = 0.
(2nd part) Let e_j and e_l be leaders of two cosets, j < l. Assume they have the same
syndrome:
e_jH^T = e_lH^T  =>  (e_j + e_l)H^T = 0.
This implies e_j + e_l = v_i, or e_l = e_j + v_i.
This means that e_l is in the j-th coset. Contradiction.
Decoding Procedure:
1. For the received vector r, compute the syndrome s = rHT.
2. Using the table, identify the coset leader (error pattern) el .
3. Add el to r to recover the transmitted codeword v.
• EX:
r = 1110101 ==> s = 001 ==> e = 0010000
Then, v = 1100101
• Syndrome decoding reduces the storage from n·2^n (the full standard array) to
2^(n-k)·(2n-k) (each table entry holds an n-bit coset leader and its (n-k)-bit syndrome). It also reduces the searching time considerably.
• Let r = r0 r1 r2 r3 r4 r5 r6 and s = s0 s1 s2
• From the H matrix:
s0 = r0 + r3 + r 5 + r 6
s1 = r1 + r3 + r 4 + r 5
s2 = r2 + r4 + r 5 + r 6
• From the table of syndromes and their corresponding
correctable error patterns, a truth table can be constructed.
A combinational logic circuit with s0 , s1 , s2 as input and
e0 , e1 , e2 , e3 , e4 , e5 , e6 as outputs can be designed.
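The syndrome equations above can be exercised directly. A minimal sketch (0-indexed bit positions, helper names are mine): the encoder enforces v1 = v4+v6+v7, v2 = v4+v5+v6, v3 = v5+v6+v7, and a table maps each single-error syndrome to its bit position, which is exactly what the combinational logic circuit computes.

```python
def encode(m):
    # m = (v4, v5, v6, v7); checks v1, v2, v3 from the parity relations.
    v4, v5, v6, v7 = m
    v1 = v4 ^ v6 ^ v7
    v2 = v4 ^ v5 ^ v6
    v3 = v5 ^ v6 ^ v7
    return [v1, v2, v3, v4, v5, v6, v7]

def syndrome(r):
    # s0 = r0+r3+r5+r6, s1 = r1+r3+r4+r5, s2 = r2+r4+r5+r6 (mod 2)
    return (r[0] ^ r[3] ^ r[5] ^ r[6],
            r[1] ^ r[3] ^ r[4] ^ r[5],
            r[2] ^ r[4] ^ r[5] ^ r[6])

# Build the syndrome -> single-error-position truth table.
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[syndrome(e)] = pos

def correct(r):
    s = syndrome(r)
    if s != (0, 0, 0):
        r = r[:]
        r[table[s]] ^= 1
    return r
```

All 7 nonzero syndromes are distinct, so every single-error pattern is corrected, consistent with the perfect-code count above.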
• Probability of undetected error on a BSC:
P_u(E) = sum_{i=d_min}^{n} A_i p^i (1-p)^(n-i)
       = 2^-(n-k) sum_{i=0}^{n} B_i (1-2p)^i - (1-p)^n
(A_i: weight distribution of the code, B_i: weight distribution of the dual code)
• Error-correcting capability: t = floor((d_min - 1) / 2)
• So decode (0 1 1 1) as
(0 1 1 1) – (0 0 2 0) = (0 1 2 1).
Applications
• Data compression.
• Turbo Codes
• The Hat Game
Communication Systems
Zhu Han
Department of Electrical and Computer Engineering
Class 25
– CRC Code
– BCH Code
– RS Code
[Figure: tx → error correction code → rx; transmission vector x, received vector r, and error vector e; parity check matrix H]
Similar structure as multiplier for encoder
A CRC with n-k check bits detects:
– All error bursts of length n-k or less.
– A fraction of error bursts of length n-k+1; the fraction is 1-2^(-(n-k-1)).
– A fraction of error bursts of length greater than n-k+1; the fraction
is 1-2^(-(n-k)).
Powerful error detection; more computational complexity
compared to the Internet checksum
camera rate:
100,000 bits/second
transmission speed:
16,000 bits/second
Multi-access communication is economical (a feature of multi-access):
• Very flexible circuit installation; over-concentrated traffic can be dispersed at any time.
• One channel can be used in different directions or areas (multi-access connecting).
(iii) The binary linear code {0000, 1001, 0110, 1111} is not cyclic, but it is
equivalent to a cyclic code.
(iv) Is the Hamming code Ham(2, 3) with the generator matrix
G = 1 0 1 1
    0 1 1 2
(a) cyclic?
(b) equivalent to a cyclic code?
Cyclic codes
EE576 Dr. Kousa Linear Block Codes 106
IV054 FREQUENCY of CYCLIC CODES
•Compared with linear codes, cyclic codes are quite scarce. For example, there are 11,811
binary (7,3) linear codes, but only two of them are cyclic.
•Trivial cyclic codes. For any field F and any integer n >= 3 there are always the following cyclic
codes of length n over F: the null code {0}, the whole space F^n, the repetition code, and the code of words whose symbols sum to 0.
•For some cases, for example for n = 19 and F = GF(2), the above four trivial cyclic codes are
the only cyclic codes.
IV054 EXAMPLE of a CYCLIC CODE
•The code with the generator matrix
G = 1 0 1 1 1 0 0
    0 1 0 1 1 1 0
    0 0 1 0 1 1 1
•has codewords
c1 = 1011100   c2 = 0101110   c3 = 0010111
c1 + c2 + c3 = 1100101
•and it is cyclic because the right shifts have the following impacts:
c1 → c2, c2 → c3, c3 → c1 + c3
c1 + c2 → c2 + c3, c1 + c3 → c1 + c2 + c3, c2 + c3 → c1
c1 + c2 + c3 → c1 + c2
POLYNOMIALS over GF(q)
RING of POLYNOMIALS
•The set of polynomials in Fq[x] of degree less than deg(f(x)), with addition and multiplication modulo f(x), forms a ring
denoted Fq[x]/f(x).
Definition A polynomial f(x) in Fq[x] is said to be reducible if f(x) = a(x)b(x), where a(x), b(x) ∈ Fq[x] and
deg(a(x)) < deg(f(x)), deg(b(x)) < deg(f(x)).
FIELD Rn, Rn = Fq[x] / (x^n - 1)
•Computation modulo x^n - 1
•Since x^n ≡ 1 (mod x^n - 1) we can compute f(x) mod x^n - 1 as follows:
•In f(x) replace x^n by 1, x^(n+1) by x, x^(n+2) by x^2, x^(n+3) by x^3, …
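The replacement rule is just exponent folding: the coefficient of x^e moves to x^(e mod n). A minimal sketch over GF(2) (function name is mine):

```python
# Reduce a GF(2) polynomial modulo x^n - 1 by folding each exponent e to e mod n.
def mod_xn_minus_1(coeffs, n):
    # coeffs[e] is the coefficient of x^e
    out = [0] * n
    for e, c in enumerate(coeffs):
        out[e % n] ^= c
    return out

# Example 4.1.15 from later in these notes: f = 1 + x^4 + x^9 + x^11 mod (1 + x^5)
f = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1]
print(mod_xn_minus_1(f, 5))   # 1 + x
```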
Algebraic characterization of cyclic codes
• Theorem A code C is cyclic if and only if C satisfies two conditions:
• (i) a(x), b(x) ∈ C ⇒ a(x) + b(x) ∈ C
• (ii) a(x) ∈ C, r(x) ∈ Rn ⇒ r(x)a(x) ∈ C
• Proof
• (1) Let C be a cyclic code. C is linear ⇒ (i) holds.
• (ii) Let a(x) ∈ C, r(x) = r0 + r1x + … + r_(n-1)x^(n-1).
• r(x)a(x) = r0a(x) + r1xa(x) + … + r_(n-1)x^(n-1)a(x)
• is in C by (i) because the summands are cyclic shifts of a(x).
CONSTRUCTION of CYCLIC CODES
•Notation If f(x) ∈ Rn, then
⟨f(x)⟩ = {r(x)f(x) | r(x) ∈ Rn}
•Theorem For any f(x) ∈ Rn, the set ⟨f(x)⟩ is a cyclic code (generated by f).
Example C = ⟨1 + x^2⟩, n = 3, q = 2.
We have to compute r(x)(1 + x^2) for all r(x) ∈ R3.
R3 = {0, 1, x, 1 + x, x^2, 1 + x^2, x + x^2, 1 + x + x^2}.
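The eight products can be enumerated mechanically. A minimal sketch (names are mine): polynomials are length-3 coefficient lists, and multiplication in R3 is cyclic convolution since x^3 ≡ 1.

```python
# Enumerate <f(x)> = { r(x)f(x) mod (x^3 - 1) : r(x) in R3 } over GF(2).
def mul_mod(a, b, n):
    # polynomials as n-entry coefficient lists; product reduced mod x^n - 1
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    out[(i + j) % n] ^= 1
    return tuple(out)

n = 3
f = [1, 0, 1]                                         # 1 + x^2
R3 = [[(m >> i) & 1 for i in range(n)] for m in range(2**n)]
code = {mul_mod(r, f, n) for r in R3}
print(sorted(code))   # the even-weight words of length 3
```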
GENERATOR POLYNOMIALS
Definition If for a cyclic code C it holds
C = ⟨g(x)⟩,
then g is called the generator polynomial for the code C.
HOW TO DESIGN CYCLIC CODES?
•The last claim of the previous theorem gives a recipe to get all cyclic codes of a given length n.
•Indeed, all we need to do is to find all factors of
x^n - 1.
•Problem: Find all binary cyclic codes of length 3.
•Solution: Since
x^3 - 1 = (x + 1)(x^2 + x + 1)
and both factors are irreducible over GF(2).
Design of generator matrices for cyclic codes
• Theorem Suppose C is a cyclic code of codewords of length n with the generator polynomial
g(x) = g0 + g1x + … + grx^r.
Then dim(C) = n - r and a generator matrix G1 for C is
G1 = g0 g1 g2 ... gr 0  0  0 ... 0
     0  g0 g1 g2 ... gr 0  0 ... 0
     0  0  g0 g1 g2 ... gr 0 ... 0
     ..          ..            ..
     0  0  ... 0  0  ... 0 g0 ... gr
Proof
(i) All rows of G1 are linearly independent.
(ii) The n - r rows of G1 represent codewords
g(x), xg(x), x^2g(x), …, x^(n-r-1)g(x)
(*)
(iii) It remains to show that every codeword in C can be expressed as a linear combination of
vectors from (*).
Indeed, if a(x) ∈ C, then
a(x) = q(x)g(x).
Since deg a(x) < n we have deg q(x) < n - r.
Hence
q(x)g(x) = (q0 + q1x + … + q_(n-r-1)x^(n-r-1))g(x)
         = q0g(x) + q1xg(x) + … + q_(n-r-1)x^(n-r-1)g(x).
EXAMPLE
•The task is to determine all ternary codes of length 4 and generators for them.
•Factorization of x^4 - 1 over GF(3) has the form
x^4 - 1 = (x - 1)(x^3 + x^2 + x + 1) = (x - 1)(x + 1)(x^2 + 1)
•Therefore there are 2^3 = 8 divisors of x^4 - 1 and each generates a cyclic code.
Generator polynomial                    Generator matrix
1                                       I4
x - 1                                   -1  1  0  0
                                         0 -1  1  0
                                         0  0 -1  1
x + 1                                    1  1  0  0
                                         0  1  1  0
                                         0  0  1  1
x^2 + 1                                  1  0  1  0
                                         0  1  0  1
(x - 1)(x + 1) = x^2 - 1                -1  0  1  0
                                         0 -1  0  1
(x - 1)(x^2 + 1) = x^3 - x^2 + x - 1    [ -1 1 -1 1 ]
(x + 1)(x^2 + 1)                        [ 1 1 1 1 ]
x^4 - 1 = 0                             [ 0 0 0 0 ]
Check polynomials and parity check matrices for cyclic codes
•Let C be a cyclic [n,k]-code with the generator polynomial g(x) (of degree n - k). By the last theorem
g(x) is a factor of x^n - 1. Hence
x^n - 1 = g(x)h(x)
•for some h(x) of degree k (where h(x) is called the check polynomial of C).
•Theorem Let C be a cyclic code in Rn with a generator polynomial g(x) and a check polynomial h(x).
Then c(x) ∈ Rn is a codeword of C if and only if c(x)h(x) ≡ 0 (this and the next congruences are modulo x^n - 1).
•A parity check matrix can be built from
h̄(x) = hk + h_(k-1)x + ... + h0x^k,
•i.e. the reciprocal polynomial of h(x).
POLYNOMIAL REPRESENTATION of DUAL CODES
•Such an encoding can be realized by the shift register shown in the figure below, where the
input is the k-bit message to be encoded followed by n - k 0's, and the output will be the
encoded message.
•Another method for encoding of cyclic codes is based on the following (so-called
systematic) representation of the generator and parity-check matrices for cyclic codes.
•Theorem Let C be an (n,k)-code with generator polynomial g(x) and r = n - k. For i = 0,1,
…,k - 1, let G_(2,i) be the length-n vector whose polynomial is G_(2,i)(x) = x^(r+i) - (x^(r+i) mod g(x)). Then
the k × n matrix G2 with row vectors G_(2,i) is a generator matrix for C.
•Moreover, if H_(2,j) is the length-n vector corresponding to the polynomial H_(2,j)(x) = x^j mod g(x),
then the r × n matrix H2 with row vectors H_(2,j) is a parity check matrix for C. The message
vector m is encoded by
m → c = mG2.
•On this basis one can construct the following shift-register encoder for the case of a
systematic representation of the generator for a cyclic code:
•Shift-register encoder for systematic representation of cyclic codes. Switch A is closed for the
first k ticks and open for the last r ticks; switch B is down for the first k ticks and up for the last r ticks.
Hamming codes as cyclic codes
•Definition (Again!) Let r be a positive integer and let H be an r × (2^r - 1)
matrix whose columns are distinct non-zero vectors of V(r,2). Then the
code having H as its parity-check matrix is called a binary Hamming
code, denoted Ham(r,2).
•It can be shown that binary Hamming codes are equivalent to cyclic
codes.
Theorem The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
Theorem If p(x) is a primitive polynomial over GF(2) of degree r, then the cyclic code
⟨p(x)⟩ is the code Ham(r,2).
PROOF of THEOREM
•The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
•It is known from algebra that if p(x) is an irreducible polynomial of degree r, then the ring F2[x]/p(x) is a field of
order 2^r.
•In addition, every finite field has a primitive element. Therefore, there exists an element α of F2[x]/p(x) such that
F2[x]/p(x) = {0, 1, α, α^2, …, α^(2^r - 2)}.
•Let us identify an element a0 + a1α + … + a_(r-1)α^(r-1) of F2[x]/p(x) with the column vector
(a0, a1, …, a_(r-1))^T
•and let H be the matrix whose columns are 1, α, α^2, …, α^(n-1). Let now C be the binary linear code having H as a parity check matrix.
•Since the columns of H are all distinct non-zero vectors of V(r,2), C = Ham(r,2).
•Putting n = 2^r - 1 we get
C = {f0 f1 … f_(n-1) ∈ V(n,2) | f0 + f1α + … + f_(n-1)α^(n-1) = 0}   (2)
  = {f(x) ∈ Rn | f(α) = 0 in F2[x]/p(x)}   (3)
BCH codes and Reed-Solomon codes
•Among the most important cyclic codes for applications are BCH codes and Reed-Solomon codes.
•BCH stands for Bose, Ray-Chaudhuri and Hocquenghem, who discovered these codes.
CONVOLUTION CODES
• Very often it is important to encode an infinite stream or several streams of data – say bits.
• Convolution codes, with simple encoding and decoding, are quite a simple generalization of linear codes and have encodings as cyclic codes.
• For example,
G1 = [x^2 + 1, x^2 + x + 1]
• is the generator matrix for a (2,1) convolution code CC1 and
G2 = 1 + x   0   x + 1
     0       1   x
• is the generator matrix for a (3,2) convolution code CC2.
ENCODING of FINITE POLYNOMIALS
• A k-tuple of input polynomials
I = (I0(x), I1(x), …, I_(k-1)(x))
• is encoded into an n-tuple of output polynomials
C = (C0(x), C1(x), …, C_(n-1)(x))
• as follows:
C = I · G
EXAMPLES
• EXAMPLE 1
• EXAMPLE 2
C = (x^2 + x, x^3 + 1) · G2 = (x^2 + x, x^3 + 1) · 1 + x   0   x + 1
                                                   0       1   x
ENCODING of INFINITE INPUT STREAMS
• The way infinite streams are encoded using convolution codes will be illustrated on the code CC1.
• An input stream I = (I0, I1, I2, …) is mapped into the output stream C = (C00, C10, C01, C11, …) defined by
C0i = Ii + I_(i-2),  C1i = Ii + I_(i-1) + I_(i-2).
• The first multiplication can be done by the first shift register from the next figure; the second multiplication can be performed by the second shift register on the next slide.
• That is, the output streams C0 and C1 are obtained by convolving the input stream with the polynomials of G1.
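The two shift registers can be sketched in a few lines (a minimal sketch, assuming the causal convention c0_i = u_i + u_(i-2), c1_i = u_i + u_(i-1) + u_(i-2); the function name is mine):

```python
# Rate-1/2 convolutional encoder for G1 = [x^2 + 1, x^2 + x + 1].
def conv_encode(u):
    s1 = s2 = 0          # shift-register contents: u_(i-1), u_(i-2)
    out = []
    for b in u:
        c0 = b ^ s2            # taps of x^2 + 1
        c1 = b ^ s1 ^ s2       # taps of x^2 + x + 1
        out.append((c0, c1))
        s1, s2 = b, s1
    return out

# The impulse response reproduces the generator taps of G1.
print(conv_encode([1, 0, 0, 0]))  # [(1, 1), (0, 1), (1, 1), (0, 0)]
```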
ENCODING
[Figure: the first shift register (taps 1 and x^2) multiplies the input stream by x^2 + 1; the second shift register (taps 1, x, x^2) multiplies it by x^2 + x + 1.]
ENCODING and DECODING
[Figure: encoder producing output streams C00, C01, C02, … and C10, C11, C12, … from input I.]
The Viterbi algorithm is used for decoding.
Cyclic Linear Codes
Rong-Jaye Chen
K[x] = {a0 + a1x + a2x^2 + a3x^3 + .... + anx^n : a0, …, an ∈ K}, with deg(f(x)) = n when an ≠ 0
– 2. Eg 4.1.1
Let f(x) = 1 + x + x^3 + x^4, g(x) = x + x^2 + x^3, h(x) = 1 + x^2 + x^4; then
(a) f(x) + g(x) = 1 + x^2 + x^4
(b) f(x) + h(x) = x + x^2 + x^3
(c) f(x)g(x) = (x + x^2 + x^3) + x(x + x^2 + x^3) + x^3(x + x^2 + x^3)
             + x^4(x + x^2 + x^3) = x + x^7
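These GF(2) computations are easy to check by representing a polynomial as an integer whose bit i is the coefficient of x^i (the helper names are mine):

```python
# GF(2)[x] arithmetic with polynomials packed into integers (bit i = coeff of x^i).
def add(a, b):
    return a ^ b          # coefficient-wise addition mod 2

def mul(a, b):
    out = 0
    while b:
        if b & 1:
            out ^= a      # add the shifted copy of a
        a <<= 1
        b >>= 1
    return out

f = 0b11011   # 1 + x + x^3 + x^4
g = 0b01110   # x + x^2 + x^3
h = 0b10101   # 1 + x^2 + x^4
print(bin(add(f, g)))   # 1 + x^2 + x^4
print(bin(mul(f, g)))   # x + x^7
```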
– Division algorithm: f(x) = q(x)h(x) + r(x), with deg(r(x)) < deg(h(x))
– f(x) = a0 + a1x + a2x^2 + .... + a_(n-1)x^(n-1) over K
– 6. E.g 4.1.12
– 8. Eg 4.1.15
f(x) = 1 + x^4 + x^9 + x^11, h(x) = 1 + x^5, p(x) = 1 + x^6; then
f(x) mod h(x) = r(x) = 1 + x = p(x) mod h(x)
=> f(x) and p(x) are equivalent mod h(x)!!
– 9. Eg 4.1.16
f(x) = 1 + x^2 + x^6 + x^9 + x^11, h(x) = 1 + x^2 + x^5, p(x) = x^2 + x^8
f(x) mod h(x) = x + x^4, p(x) mod h(x) = 1 + x^3
=> f(x) and p(x) are NOT equivalent mod h(x)!!
f(x) = 1 + x + x^7, g(x) = 1 + x + x^2, h(x) = 1 + x^5, p(x) = 1 + x^6,
so f(x) ≡ g(x) (mod h(x)); then
f(x) + p(x) and g(x) + p(x):
((1 + x + x^7) + (1 + x^6)) mod h(x) = x^2 = ((1 + x + x^2) + (1 + x^6)) mod h(x)
f(x)p(x) and g(x)p(x):
((1 + x + x^7)(1 + x^6)) mod h(x) = 1 + x^3 = ((1 + x + x^2)(1 + x^6)) mod h(x)
• Lemma 4.2.3 Let π denote the cyclic shift. Then π(v + w) = π(v) + π(w),
and π(av) = aπ(v), a ∈ K = {0,1}.
Thus to show a linear code C is cyclic
it is enough to show that π(v) ∈ C
for each word v in a basis for C.
– 6. Theorem 4.2.13
C: a cyclic code of length n,
g(x): the generator polynomial, which is the unique nonzero
polynomial of minimum degree in C.
degree(g(x)) = n - k,
• 1. C has dimension k
• 2. g(x), xg(x), x2g(x), …., xk-1g(x) are a basis for C
• 3. If c(x) in C, c(x)=a(x)g(x) for some polynomial a(x)
with degree(a(x))<k
– 8. Theorem 4.2.17
g(x) is the generator polynomial for a linear cyclic code of length n if and only if g(x) divides 1 + x^n
(so 1 + x^n = g(x)h(x)).
– 9. Corollary 4.2.18
The generator polynomial g(x) for the smallest cyclic code of length n containing
the word v (polynomial v(x)) is g(x) = gcd(v(x), 1 + x^n)
– 10. Eg 4.2.19
n = 8, v = 11011000, so v(x) = 1 + x + x^3 + x^4
g(x) = gcd(1 + x + x^3 + x^4, 1 + x^8) = 1 + x^2
Thus g(x) = 1 + x^2 generates the smallest cyclic linear code containing
v(x), which has dimension 8 - 2 = 6.
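The gcd can be computed with Euclid's algorithm in GF(2)[x]. A minimal sketch with polynomials packed into integers (bit i = coefficient of x^i; helper names are mine):

```python
# Remainder of a divided by b in GF(2)[x].
def pmod(a, b):
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

# Euclid's algorithm in GF(2)[x].
def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

v = 0b11011          # 1 + x + x^3 + x^4
m = (1 << 8) | 1     # 1 + x^8
print(bin(pgcd(v, m)))   # 1 + x^2, as in Eg 4.2.19
```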
     g(x)
G =  xg(x)           n: length of code, k = n - deg(g(x))
     :
     x^(k-1)g(x)
2. Eg 4.3.2
• C: the linear cyclic code of length n = 7 with generator polynomial
g(x) = 1 + x + x^3, and deg(g(x)) = 3 => k = 4
     g(x)     = 1 + x + x^3       = 1101000
G =  xg(x)    = x + x^2 + x^4     = 0110100
     x^2g(x)  = x^2 + x^3 + x^5   = 0011010
     x^3g(x)  = x^3 + x^4 + x^6   = 0001101
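Building G by shifting the coefficient vector of g(x) is mechanical. A minimal sketch (function name is mine):

```python
# Rows of G are g(x), x g(x), ..., x^(k-1) g(x) as length-n binary vectors.
def gen_matrix(g_coeffs, n):
    k = n - (len(g_coeffs) - 1)
    rows = []
    for i in range(k):
        row = [0] * n
        for j, c in enumerate(g_coeffs):
            row[i + j] = c      # shift g's coefficients i places right
        rows.append(row)
    return rows

G = gen_matrix([1, 1, 0, 1], 7)   # g(x) = 1 + x + x^3, n = 7
for row in G:
    print(''.join(map(str, row)))  # 1101000, 0110100, 0011010, 0001101
```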
– If n = 2^r s with s odd, then 1 + x^n = (1 + x^s)^(2^r).
– 3. Coro 4.4.4
Let n = 2^r s, where s is odd, and let 1 + x^s be
the product of z irreducible polynomials.
Then there are (2^r + 1)^z - 2 proper linear
cyclic codes of length n.
– 5. Eg 4.4.12
For n = 7, the cyclotomic cosets are
C0 = {0}, so c0(x) = x^0 = 1
C1 = {1, 2, 4} = C2 = C4, so c1(x) = x + x^2 + x^4
C3 = {3, 5, 6} = C5 = C6, so c2(x) = x^3 + x^5 + x^6
: :
• [5]. Dual cyclic codes
– 1. The dual code of a cyclic code is also cyclic
– 2. Lemma 4.5.1
a ↔ a(x), b ↔ b(x) and b' ↔ b'(x) = x^n b(x^-1) mod (1 + x^n);
then
a(x)b(x) mod (1 + x^n) = 0 iff π^k(a) · b' = 0
for k = 0, 1, …, n-1
– 3. Theorem 4.5.2
C: a linear cyclic code, length n, dimension k, with generator g(x).
If 1 + x^n = g(x)h(x), then
C⊥ is a linear cyclic code of dimension n - k with generator x^k h(x^-1)
– 5. Eg. 4.5.4
g(x) = 1 + x + x^2, n = 6, k = 6 - 2 = 4
h(x) = 1 + x + x^3 + x^4
The generator for C⊥ is g⊥(x) = x^4 h(x^-1) = 1 + x + x^3 + x^4
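Eg 4.5.4 can be reproduced by dividing 1 + x^n by g(x) and then reversing h's coefficient vector (which implements x^k h(x^-1) when h has a nonzero constant term). A minimal sketch with polynomials as integers, bit i = coefficient of x^i (helper names are mine):

```python
# Quotient and remainder in GF(2)[x].
def pdivmod(a, b):
    q = 0
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift
    return q, a

n, g = 6, 0b111                  # g(x) = 1 + x + x^2
h, r = pdivmod((1 << n) | 1, g)  # h(x) = (1 + x^6) / g(x)
assert r == 0                    # g(x) divides 1 + x^6
# Dual generator: reverse h's coefficient bits (valid since h(0) != 0).
g_dual = int(bin(h)[2:][::-1], 2)
print(bin(h), bin(g_dual))       # both 1 + x + x^3 + x^4 (h is palindromic here)
```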
Lecture 8
Today, we are going to talk about:
• Channel coding
2005-02-09 Lecture 8
What is channel coding?
• Channel coding:
– Transforming signals to improve communications
performance by increasing the robustness against
channel impairments (noise, interference, fading, ..)
– Waveform coding: Transforming waveforms to
better waveforms
– Structured sequences: Transforming data
sequences into better sequences, having structured
redundancy.
• “Better” in the sense of making the decision
process less subject to errors.
Error control techniques
• Automatic Repeat reQuest (ARQ)
– Full-duplex connection, error detection codes
– The receiver sends feedback to the transmitter indicating
whether an error is detected in the received packet
(Not-Acknowledgement (NACK)) or not (Acknowledgement (ACK)).
– The transmitter retransmits the previously sent
packet if it receives a NACK.
• Forward Error Correction (FEC)
– Simplex connection, error correction codes
– The receiver tries to correct some errors
• Hybrid ARQ (ARQ+FEC)
– Full-duplex, error detection and correction codes
Why use error correction coding?
– Error performance vs. bandwidth
– Power vs. bandwidth
– Data rate vs. bandwidth
Coding gain:
For a given bit-error probability,
the reduction in the Eb/N0 that can be
realized through the use of the code:
G [dB] = (Eb/N0)_u [dB] - (Eb/N0)_c [dB]
[Figure: P_B vs. Eb/N0 (dB), coded and uncoded curves]
Channel models
Linear block codes
Some definitions – cont’d
• Binary field:
– The set {0,1}, under modulo-2 binary addition and
multiplication, forms a field.
Addition        Multiplication
0 + 0 = 0       0 · 0 = 0
0 + 1 = 1       0 · 1 = 0
1 + 0 = 1       1 · 0 = 0
1 + 1 = 0       1 · 1 = 1
– The binary field is also called the Galois field, GF(2).
Some definitions – cont’d
• Fields:
– Let F be a set of objects on which two operations ‘+’ and ‘·’ are
defined.
– F is said to be a field if and only if
1. F forms a commutative group under the + operation. The
additive identity element is labeled “0”.
a, b ∈ F ⇒ a + b = b + a ∈ F
2. F - {0} forms a commutative group under the · operation. The
multiplicative identity element is labeled “1”.
a, b ∈ F ⇒ a · b = b · a ∈ F
3. The operations “+” and “·” distribute:
a · (b + c) = (a · b) + (a · c)
Some definitions – cont’d
• Vector space:
– Let V be a set of vectors and F a field of
elements called scalars. V forms a vector space
over F if:
1. Commutative: u, v ∈ V ⇒ u + v = v + u ∈ V
2. Closure: a ∈ F, v ∈ V ⇒ a · v ∈ V
3. Distributive:
(a + b) · v = a · v + b · v and a · (u + v) = a · u + a · v
4. Associative: a, b ∈ F, v ∈ V ⇒ (a · b) · v = a · (b · v)
5. ∀v ∈ V, 1 · v = v
Some definitions – cont’d
– Examples of vector spaces
• The set of binary n-tuples, denoted by Vn
Some definitions – cont’d
• Spanning set:
– A collection of vectors G = {v1, v2, …, vn},
the linear combinations of which include all vectors in
a vector space V, is said to be a spanning set for V or
to span V.
• Example:
(1000), (0110), (1100), (0011), (1001) spans V4 .
• Bases:
– A spanning set for V that has minimal cardinality is
called a basis for V.
• Cardinality of a set is the number of objects in the set.
• Example:
(1000), (0100), (0010), (0001) is a basis for V4 .
Linear block codes
• Linear block code (n,k)
– A set C ⊂ Vn with cardinality 2^k is called a linear block
code if, and only if, it is a subspace of the vector
space Vn:
Vk → C ⊂ Vn
• Members of C are called codewords.
• The all-zero codeword is a codeword.
• Any linear combination of codewords is a
codeword.
Linear block codes – cont’d
[Figure: mapping from the message space Vk to the code C ⊂ Vn via the bases of C]
Linear block codes – cont’d
• The information bit stream is chopped into blocks of k bits.
• Each block is encoded to a larger block of n bits.
• The coded bits are modulated and sent over channel.
• The reverse procedure is done at the receiver.
Data block (k bits) → Channel encoder → Codeword (n bits)
Linear block codes – cont’d
• The error detection capability is given by
e = d_min - 1
• The error-correcting capability t of a code, which is defined as the
maximum number of guaranteed correctable errors per codeword, is
t = ⌊(d_min - 1)/2⌋
Linear block codes – cont’d
• For memoryless channels, the probability that the
decoder commits an erroneous decoding is bounded by
P_B ≤ (1/n) Σ_{j=t+1}^{n} j C(n,j) p^j (1-p)^(n-j)
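The bound above is easy to evaluate numerically. A minimal sketch (function name is mine):

```python
import math

# Bound on decoded bit-error probability for a t-error-correcting
# (n,k) block code on a BSC with crossover probability p.
def p_bit(n, t, p):
    return sum(j * math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

# (7,4) Hamming code, t = 1: decoded error rate is far below the raw p.
print(p_bit(7, 1, 0.01))
```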
Linear block codes – cont’d
• Discrete, memoryless, symmetric channel model: a transmitted bit is received
correctly with probability 1-p and flipped with probability p.
– Note that for coded systems, the coded bits are
modulated and transmitted over the channel. For example,
for M-PSK modulation on AWGN channels (M > 2):
p ≈ (2/log2 M) Q( sqrt(2 log2 M · Ec/N0) · sin(π/M) )
  = (2/log2 M) Q( sqrt(2 log2 M · Eb Rc/N0) · sin(π/M) )
where Ec = Rc·Eb is the energy per coded bit.
Linear block codes – cont’d
• Encoding with the generator matrix G, whose rows V1, …, Vk form a basis of C:
U = mG
(u1, u2, …, un) = (m1, m2, …, mk) · [V1; V2; …; Vk]
(u1, u2, …, un) = m1·V1 + m2·V2 + … + mk·Vk
– The rows of G are linearly independent.
Linear block codes – cont’d
• Example: Block code (6,3)
     V1   1 1 0 1 0 0
G =  V2   0 1 1 0 1 0
     V3   1 0 1 0 0 1
Message vector   Codeword
000              000000
100              110100
010              011010
110              101110
001              101001
101              011101
011              110011
111              000111
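The codeword table can be regenerated from G alone: each codeword is the GF(2) sum of the rows of G selected by the message bits. A minimal sketch (function name is mine):

```python
from itertools import product

# Generator matrix of the (6,3) block code above.
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m):
    u = [0] * 6
    for bit, row in zip(m, G):
        if bit:
            u = [a ^ b for a, b in zip(u, row)]
    return u

for m in product([0, 1], repeat=3):
    print(m, ''.join(map(str, encode(m))))
```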
Linear block codes – cont’d
• Systematic block code (n,k)
– For a systematic code, the first (or last) k elements in
the codeword are information bits.
G = [P | I_k]
I_k = k × k identity matrix
P = k × (n-k) matrix
U = (u1, u2, ..., un) = (p1, p2, ..., p_(n-k), m1, m2, ..., mk)
                        parity bits          message bits
Linear block codes – cont’d
• For any linear code with generator matrix G we can find an
(n-k) × n matrix H whose rows are
orthogonal to the rows of G:
GH^T = 0
Linear block codes – cont’d
[Figure: Data source → Format → Channel encoding (m → U) → Modulation → channel → Demodulation/Detection (r) → Channel decoding (m̂) → Format → Data sink]
r = U + e
r = (r1, r2, ...., rn)  received codeword or vector
e = (e1, e2, ...., en)  error pattern or vector
• Syndrome testing:
– S is the syndrome of r, corresponding to the error
pattern e: S = rH^T = eH^T
Linear block codes – cont’d
• Standard array
1. For row i = 2, 3, ..., 2^(n-k), find a vector in Vn of
minimum weight which is not already listed in the
array.
2. Call this pattern e_i and form the i-th row as the
corresponding coset:
zero codeword →  U1           U2               ...  U_(2^k)
coset →          e2           e2+U2            ...  e2+U_(2^k)
                 :
                 e_(2^(n-k))  e_(2^(n-k))+U2   ...  e_(2^(n-k))+U_(2^k)
(first column: coset leaders)
Linear block codes – cont’d
• Standard array and syndrome table decoding
1. Calculate S = rH^T.
2. Find the coset leader ê = e_i corresponding to S.
3. Calculate Û = r + ê and the corresponding m̂.
– Note that Û = r + ê = (U + e) + ê = U + (e + ê)
• If ê = e, the error is corrected.
• If ê ≠ e, an undetectable decoding error occurs.
• Cyclic shift of a codeword U = (u0, u1, ..., u_(n-1)):
U^(i) = (u_(n-i), u_(n-i+1), ..., u_(n-1), u0, u1, u2, ..., u_(n-i-1))
For U = (1101):
U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U
• In polynomial form, X·U(X) = u_(n-1)(X^n + 1) + U^(1)(X), so
U^(1)(X) = X·U(X) modulo (X^n + 1)
– Hence, by extension:
U^(i)(X) = X^i·U(X) modulo (X^n + 1)
3. Add p(X) to X^(n-k)m(X) to form the codeword U(X)
G = [P | I_4] =  1 1 0 1 0 0 0
                 0 1 1 0 1 0 0
                 1 1 1 0 0 1 0
                 1 0 1 0 0 0 1
H = [I_3 | P^T] =  1 0 0 1 0 1 1
                   0 1 0 1 1 1 0
                   0 0 1 0 1 1 1
[Figure: P_B vs. Eb/N0 (dB) for QPSK and 8PSK]
ADVANTAGE of GENERATOR MATRIX:
• The encoder needs to store only the k rows of G instead of the 2^k codeword vectors.
                    1 1 0 1 0 0
U = [m1, m2, m3] ·  0 1 1 0 1 0   = [u1, u2, u3, u4, u5, u6],   G = [P | I3]
                    1 0 1 0 0 1
• With G = [P | I_k], the parity check matrix is H = [I_(n-k) | P^T], i.e.
        I_(n-k)               p11  p12  ...  p1,(n-k)
H^T  =            with  P  =  p21  p22  ...  p2,(n-k)
        P                     ...
                              pk1  pk2  ...  pk,(n-k)
S = rH^T = [ 1, 1+1, 1+1 ] = [ 1 0 0 ]
(syndrome of the corrupted code vector)
Now we can verify that the syndrome of the corrupted code vector is the same as
the syndrome of the error pattern:
S = eH^T = [1 0 0 0 0 0]H^T = [ 1 0 0 ]
( = syndrome of the error pattern )
Error Correction
Since there is a one-to-one correspondence between correctable
error patterns and syndromes, we can correct such error patterns.
Assume the 2^n n-tuples that represent possible received vectors
are arranged in an array called the standard array.
1. The first row contains all the code vectors, starting with the all-
zeros vector
2. The first column contains all the correctable error patterns
The standard array for an (n,k) code is:
U1           U2               ...  Ui           ...  U_(2^k)
e2           U2+e2            ...  Ui+e2        ...  U_(2^k)+e2
:                                  Ui+ej
e_(2^(n-k))  U2+e_(2^(n-k))   ...               ...  U_(2^k)+e_(2^(n-k))
Each row called a coset consists of an error pattern in the first
column, also known as the coset leader, followed by the code
vectors perturbed by that error pattern.
There are 2^n / 2^k = 2^(n-k) cosets.
All members of a coset have the same syndrome and in fact the
syndrome is used to estimate the error pattern.
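The coset structure can be verified exhaustively for the (6,3) code used earlier. A minimal sketch (codewords packed into 6-bit integers; variable names are mine):

```python
# Build the (6,3) code from its generator rows, then partition all 2^6
# words into cosets; there must be 2^(n-k) = 8 of them, each of size 8.
G = [0b110100, 0b011010, 0b101001]

code = set()
for m in range(8):                      # all 2^3 messages
    w = 0
    for i in range(3):
        if (m >> i) & 1:
            w ^= G[i]
    code.add(w)

cosets = {}
for v in range(64):
    key = frozenset(v ^ c for c in code)   # the coset containing v
    cosets.setdefault(key, v)

print(len(cosets))   # 8
```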
Dr. I. J. Wassell
[Figure: receiver block diagram – line decoding, demodulation (receive filter, etc.), detection of r, and error-control decoding between the digital source and sink]
Error Models
• Binary Symmetric Memoryless Channel
– Assumes transmitted symbols are binary
– Errors affect ‘0’s and ‘1’s with equal probability
(i.e., symmetric)
– Errors occur randomly and are independent from
bit to bit (memoryless)
[Figure: BSC with crossover probability p, where p is the probability of bit error, or the Bit Error Rate (BER), of the channel]
Error Models
• Many other types
• Burst errors, i.e., contiguous bursts of bit errors
– output from DFE (error propagation)
– common in radio channels
– Insertion, deletion and transposition errors
• We will consider mainly random errors
Error Control Techniques
• Error detection in a block of data
– Can then request a retransmission, known as automatic repeat request
(ARQ) for sensitive data
– Appropriate for
• Low delay channels
• Channels with a return path
– Not appropriate for delay sensitive data, e.g., real time speech and data
• Forward Error Correction (FEC)
– Coding designed so that errors can be corrected at the receiver
– Appropriate for delay sensitive and one-way transmission (e.g., broadcast
TV) of data
– Two main types, namely block codes and convolutional codes. We will only
look at block codes
Block Codes
• We will consider only binary data
• Data is grouped into blocks of length k bits (dataword)
• Each dataword is coded into blocks of length n bits (codeword), where
in general n>k
• This is known as an (n,k) block code
• A vector notation is used for the datawords and codewords,
– Dataword d = (d1 d2….dk)
– Codeword c = (c1 c2……..cn)
• The redundancy introduced by the code is quantified by the code rate,
– Code rate = k/n
– i.e., the higher the redundancy, the lower the code rate
[Figure: Dataword (k bits) → Channel coder → Codeword + possible errors (n bits) → Channel decoder → corrected data + error flags]
• The decoder gives corrected data
• It may also give error flags to
– Indicate reliability of decoded data
– Help with schemes employing multiple layers of error correction
Parity Codes
• Example of a simple block code – Single Parity Check Code
– In this case, n = k+1, i.e., the codeword is the dataword with
one additional bit
– For ‘even’ parity the additional bit is
q = Σ_{i=1}^{k} d_i (mod 2)
Dataword   Codeword
0 0 0      0 0 0 0
0 0 1      0 0 1 1
0 1 0      0 1 0 1
0 1 1      0 1 1 0
1 0 0      1 0 0 1
1 0 1      1 0 1 0
1 1 0      1 1 0 0
1 1 1      1 1 1 1
[Figure: X marks valid codewords, O marks invalid codewords]
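The codeword table is one line of arithmetic per dataword. A minimal sketch (function name is mine):

```python
# Even-parity (k+1, k) single parity check code: append the mod-2 sum.
def encode(d):
    q = sum(d) % 2
    return d + [q]

for d in ([0, 0, 0], [0, 1, 1], [1, 0, 1]):
    print(d, '->', encode(d))   # every codeword has even weight
```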
• The error detection capability is e = d_min - 1.
• The maximum number of correctable
errors is given by
t = ⌊(d_min - 1)/2⌋
where d_min is the minimum Hamming distance
between 2 codewords and ⌊·⌋ means the
largest integer not exceeding the argument
G = 1 0 0 0 1 1 0
    0 1 0 0 1 0 1
    0 0 1 0 0 1 1
    0 0 0 1 1 1 1
• So, to obtain the codeword for dataword 1011, the first, third and fourth
rows of G are added together, giving 1011010
• This process will now be described in more detail
• Each codeword is the mod-2 sum c = Σ_{i=1}^{k} d_i a_i of the rows a_i of G selected by the data bits
• For example:
G = 1 0 1 1     a1 = [1011]
    0 1 0 1     a2 = [0101]
• For d = [1 1], then
c = a1 + a2 = [1 1 1 0]
• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a
plane) spanned by a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is
orthogonal to the plane containing the rows of the G matrix, i.e., a1 and
a2
• Any received codeword which is not in the plane containing a1 and a2
(i.e., an invalid codeword) will thus have a component in the direction of
b1, yielding a non-zero dot product between itself and b1
[Figure: codewords c1, c2, c3 lie in the plane spanned by a1 and a2; b1 is orthogonal to that plane]
Error Syndrome
• For error correcting codes we need a method to compute the
required correction
• To do this we use the Error Syndrome, s of a received
codeword, cr
s = crHT
• If cr is corrupted by the addition of an error vector, e, then
cr = c + e
and
s = (c + e) HT = cHT + eHT
s = 0 + eHT
Syndrome depends only on the error
G = [I | P] =  1 0 0 0 0 1 1
               0 1 0 0 1 0 1
               0 0 1 0 1 1 0
               0 0 0 1 1 1 1
H = [P^T | I] =  0 1 1 1 1 0 0
                 1 0 1 1 0 1 0
                 1 1 0 1 0 0 1

                                 0 1 1
                                 1 0 1
                                 1 1 0
s = c_r H^T = [1 1 0 1 0 0 0] ·  1 1 1   = [0 0 1]
                                 1 0 0
                                 0 1 0
                                 0 0 1
• In this case a syndrome 001 indicates an error in bit 1 of the
codeword
Standard Array
• From the standard array we can find the most likely transmitted
codeword given a particular received codeword without having
to have a look-up table at the decoder containing all possible
codewords in the standard array
• Not surprisingly it makes use of syndromes
c1 (all zero)  c2       ……  cM       s0
e1             c2+e1    ……  cM+e1    s1      All patterns in a row have the same syndrome
e2             c2+e2    ……  cM+e2    s2
e3             c2+e3    ……  cM+e3    s3      Different rows have distinct syndromes
…              ……       ……  ……       …
eN             c2+eN    ……  cM+eN    sN
[Decoder: compute the syndrome s = cr·Hᵀ, look up the error pattern e in a syndrome table, then output c = cr + e.]
Using the same G = [I | P] and H = [−Pᵀ | I] as above: for the error pattern e = 0010000 the received word is cr = 0100011, so the transmitted codeword was c = cr + e = 0110011. The syndrome s = cr·Hᵀ = e·Hᵀ = 110 is the third row of Hᵀ, locating the error in the third codeword position (matching e).
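The syndrome-table decoder sketched above can be written in a few lines. This sketch assumes the same systematic (7,4) matrices; the helper names are mine:

```python
HT = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1],
      [1, 0, 0], [0, 1, 0], [0, 0, 1]]   # rows of H^T = columns of H

def syndrome(word):
    return tuple(sum(word[i] * HT[i][j] for i in range(7)) % 2
                 for j in range(3))

# The syndrome of a single-error pattern is the corresponding row of H^T,
# so a small table maps each nonzero syndrome to the bit it locates.
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[syndrome(e)] = pos

def correct(received):
    s = syndrome(received)
    if s == (0, 0, 0):
        return list(received)        # already a codeword
    fixed = list(received)
    fixed[table[s]] ^= 1             # flip the located bit
    return fixed

print(correct([0, 1, 0, 0, 0, 1, 1]))   # -> [0, 1, 1, 0, 0, 1, 1]
```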
• The only known perfect binary codes are the SEC Hamming codes and the TEC Golay (23,12) code (dmin = 7). Using the previous equation yields
Encoding
Hard-Decoding Soft-Decoding
Example GF(2):

addition (XOR):      multiplication (AND):
+ | 0 1              · | 0 1
0 | 0 1              0 | 0 0
1 | 1 0              1 | 0 1

C = X·G
• Dual (n, n − k) code
• Codeword polynomial C(p) = X(p)·g(p), where g(p) is the generator polynomial of degree n − k and divides pⁿ + 1:
pⁿ + 1 = g(p)·h(p)
2006/07/07 Wireless Communication Engineering I 287
Soft-Decoding & Maximum Likelihood
r = s⁽ᵏ⁾ + n
  = (r1, …, rn) = (s1, …, sn) + (n1, …, nn)

Prob(r | s⁽ᵏ⁾): likelihood

maxₖ Prob(r | s⁽ᵏ⁾)  ⇔  minₖ ‖r − s⁽ᵏ⁾‖²  =  minᵢ Σⱼ₌₁ⁿ (rⱼ − cᵢⱼ)²

where Cᵢ: i-th codeword, cᵢⱼ: j-th position bit of the i-th codeword, rⱼ: j-th received signal.

→ The largest matched-filter output is selected.
dmin ↑ → Cg ↑
Discrete-time channel =
modulator + AWGN channel + demodulator
→ BSC with crossover probability
p = Q(√(2γb·Rc))        : coherent PSK
p = Q(√(γb·Rc))         : coherent FSK
p = (1/2)·exp(−γb·Rc/2) : noncoherent FSK
Maximum-Likelihood Decoding →
Minimum Distance Decoding
Syndrome Calculation by Parity check matrix H
S = Y·Hᵀ = (Cm + e)·Hᵀ = e·Hᵀ
where Cm: transmitted codeword, Y: received word at the demodulator, e: binary error vector.
• Comparison of Performance between Hard-Decision and Soft-Decision
Decoding
At the same code rate Rc, soft-decision decoding requires roughly 2 dB less Eb/N0 than hard-decision decoding to reach the same error probability.
Block and convolutional interleaving are effective against burst errors.
Convolutional Codes
The performance of convolutional codes can exceed that of block codes, as shown by Viterbi's analysis.
P(e) ≈ 2^(−n·E(R)), where E(R) is the error exponent.
C∞ = (P/N₀)·log₂e (the infinite-bandwidth capacity limit).
10-1 INTRODUCTION
Figure 10.1 Single-bit error
Figure 10.2 Burst error of length 8
Error detection/correction
Error detection
Check whether any error has occurred
The number of errors does not matter
The positions of the errors do not matter
Error correction
Need to know the number of errors
Need to know the positions of errors
More difficult
Figure 10.3 The structure of encoder and decoder
Modular Arithmetic
Modulus N: the upper limit
In modulo-N arithmetic, we use only the
integers in the range 0 to N −1, inclusive.
If N is 2, we use only 0 and 1
No carries in addition or subtraction
Figure 10.4 XORing of two single bits or two words
10-2 BLOCK CODING
Figure 10.5 Datawords and codewords in block coding
Example 10.1
Figure 10.6 Process of error detection in block coding
Table 10.1 A code for error detection (Example 10.2)
Figure 10.7 Structure of encoder and decoder in error correction
Table 10.2 A code for error correction (Example 10.3)
Hamming Distance
The Hamming distance between two
words is the number of differences
between corresponding bits.
The minimum Hamming distance is the
smallest Hamming distance between
all possible pairs in a set of words.
We can count the number of 1s in the XOR of two words.
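That observation gives a one-line implementation; a sketch:

```python
def hamming_distance(a, b):
    """Hamming distance = number of 1s in the XOR of two words (as integers)."""
    return bin(a ^ b).count("1")

print(hamming_distance(0b10101, 0b10011))   # -> 2
```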
Example 10.5
Solution
We first find all the Hamming distances.
•The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees
detection of only a single error.
•For example, if the third codeword (101) is sent and one error occurs, the received codeword does not
match any valid codeword. If two errors occur, however, the received codeword may match a valid
codeword and the errors are not detected.
Example 10.8
Figure 10.8 Geometric concept for finding dmin in error detection
Figure 10.9 Geometric concept for finding dmin in error correction
Example 10.9
Solution
This code guarantees the detection of up to three errors
(s = 3), but it can correct up to one error. In other words,
if this code is used for error correction, part of its capability is wasted. Error
correction codes need to have an odd minimum distance (3, 5, 7, . . . ).
10-3 LINEAR BLOCK CODES
Example 10.10
Linear Block Codes
Simple parity-check code
Hamming codes
Figure 10.10 Encoder and decoder for simple parity-check code
Example 10.12
Figure 10.11 Two-dimensional parity-check code
Table 10.4 Hamming code C(7, 4)
Figure 10.12 The structure of the encoder and decoder for a Hamming code
Table 10.5 Logical decision made by the correction logic analyzer
r0=a2+a1+a0 S0=b2+b1+b0+q0
r1=a3+a2+a1 S1=b3+b2+b1+q1
r2=a1+a0+a3 S2=b1+b0+b3+q2
Example 10.13
Let us trace the path of three datawords from the sender to the
destination:
1. The dataword 0100 becomes the codeword 0100011.
The codeword 0100011 is received. The syndrome is
000, the final dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001.
The codeword 0011001 is received. The syndrome is
011. After flipping b2 (changing the 0 to 1), the final
dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000.
The codeword 0001000 is received. The syndrome is
101. After flipping b0, we get 0000, the wrong dataword.
This shows that our code cannot correct two errors.
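The trace in Example 10.13 can be reproduced with a short sketch of the C(7,4) encoder and correction logic from Tables 10.4 and 10.5 (bit order a3 a2 a1 a0 r2 r1 r0; the syndrome-to-bit mapping below is derived from the check equations, and the helper names are mine):

```python
def encode(a3, a2, a1, a0):
    r0 = (a2 + a1 + a0) % 2
    r1 = (a3 + a2 + a1) % 2
    r2 = (a1 + a0 + a3) % 2
    return [a3, a2, a1, a0, r2, r1, r0]

# Syndrome (s2 s1 s0) -> index of the bit to flip; None means no error.
FLIP = {(0, 0, 0): None, (1, 1, 0): 0, (0, 1, 1): 1, (1, 1, 1): 2,
        (1, 0, 1): 3, (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}

def decode(word):
    b3, b2, b1, b0, q2, q1, q0 = word
    s0 = (b2 + b1 + b0 + q0) % 2
    s1 = (b3 + b2 + b1 + q1) % 2
    s2 = (b1 + b0 + b3 + q2) % 2
    pos = FLIP[(s2, s1, s0)]
    if pos is not None:
        word = word[:pos] + [word[pos] ^ 1] + word[pos + 1:]
    return word[:4]                          # return the dataword

assert encode(0, 1, 0, 0) == [0, 1, 0, 0, 0, 1, 1]    # case 1
assert decode([0, 0, 1, 1, 0, 0, 1]) == [0, 1, 1, 1]  # case 2: flips b2
assert decode([0, 0, 0, 1, 0, 0, 0]) == [0, 0, 0, 0]  # case 3: wrong dataword
```

The third assertion shows exactly the failure described: two channel errors drive the decoder to the wrong dataword.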
10-4 CYCLIC CODES
Figure 10.14 CRC encoder and decoder
Figure 10.15 Division in CRC encoder
Figure 10.16 Division in the CRC decoder for two cases
Figure 10.21 A polynomial to represent a binary word
Figure 10.22 CRC division using polynomials
10-5 CHECKSUM
Example 10.18
Example 10.19
Example 10.20
Solution
The number 21 in binary is 10101 (it needs five bits). We
can wrap the leftmost bit and add it to the four rightmost
bits. We have (0101 + 1) = 0110 or 6.
Example 10.21
Solution
In one’s complement arithmetic, the negative or
complement of a number is found by inverting all bits.
Positive 6 is 0110; negative 6 is 1001. If we consider only
unsigned numbers, this is 9. In other words, the
complement of 6 is 9. Another way to find the complement
of a number in one's complement arithmetic is to subtract
the number from 2ⁿ − 1 (16 − 1 = 15 in this case).
Figure 10.24 Example 10.22
Note
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are
added using one’s complement addition.
4. The sum is complemented and becomes the
checksum.
5. The checksum is sent with the data.
Note
Receiver site:
1. The message (including checksum) is
divided into 16-bit words.
2. All words are added using one’s
complement addition.
3. The sum is complemented and becomes the
new checksum.
4. If the value of checksum is 0, the message
is accepted; otherwise, it is rejected.
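The sender and receiver procedures above can be sketched as follows (16-bit one's complement addition with end-around carry; the data words are made-up values):

```python
def ones_complement_sum(words):
    """Add 16-bit words with end-around carry (one's complement addition)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any carry back in
    return total

def make_checksum(words):
    """Checksum = complement of the one's complement sum."""
    return ~ones_complement_sum(words) & 0xFFFF

data = [0x4500, 0x0030, 0x4422]        # made-up 16-bit message words
cksum = make_checksum(data)

# Receiver: adding all words including the checksum and complementing
# the sum yields 0, so the message is accepted.
assert make_checksum(data + [cksum]) == 0
```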
Example 10.23
Modern Coding Theory: LDPC Codes
Hossein Pishro-Nik
University of Massachusetts Amherst
November 7, 2006
Outline
Sector
Some of the bits (e.g., 010101011110010101001) may change during the transmission from the disk to the disk drive.
Errors in Information Transmission: Cont.
BSC
Information bits 10010…10101… → corrupted bits 10110…00101…
Each bit is flipped with probability p and received correctly with probability 1 − p.
Example:
Repetition Codes: Cont.
Encoder (repeat each bit three times): (x1) → codeword (y1, y2, y3) = (x1, x1, x1)
BSC
Decoder (majority voting): corrupted codeword (z1, z2, z3) → (x̂1)
Repetition Codes: Cont.
(x1) = (0) → Encoder → (0, 0, 0) → BSC → (1, 0, 0) → Decoder → (0)
pe = p³ + 3p²(1 − p)
p = 0.01 → pe ≈ 3 × 10⁻⁴
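A quick check of the arithmetic:

```python
p = 0.01
pe = p**3 + 3 * p**2 * (1 - p)   # two or three of the three copies flipped
print(pe)                        # about 2.98e-4, i.e. roughly 3 x 10^-4
```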
Error Control Coding: Block Codes
(x1, x2, …, xk) → Encoder → codeword (y1, y2, …, yn), n > k
BSC
corrupted codeword (z1, z2, …, zn) → Decoder → (x̂1, x̂2, …, x̂k)
Encoding: mapping the information block to the corresponding codeword.
Code Rate
R = k / n  (code rate = dimension / code length)
0 ≤ R ≤ 1
Repetition Codes Revisited
(x1) → Encoder → (y1, y2, y3) = (x1, x1, x1), with k = 1 and n = 3
BSC
(z1, z2, z3) → Decoder → (x̂1)
Block Codes: Cont.
The 2^k datawords — (0,0), (0,1), (1,0), (1,1) for k = 2 — are mapped into a space of 2^n points.
Good Block Codes
Good Codes:
Linear Block Codes
A linear mapping:

(y1, y2, …, yn) = (x1, x2, …, xk) · G

where G is the k × n generator matrix

    | g11 g12 … g1n |
G = | g21 g22 … g2n |
    |  ⋮            |
    | gk1 gk2 … gkn |
Channel Capacity
(x1, …, xk) → Encoder → (y1, …, yn) → Noisy channel → (z1, …, zn) → Decoder → (x̂1, …, x̂k)
Shannon Codes
The 2^k messages — (0,0), (0,1), (1,0), (1,1) for k = 2 — are mapped into a space of 2^n points.
Shannon Random Codes
Error Control Coding:
Low-Density Parity-Check (LDPC) Codes
Ideal codes
Have efficient encoding
Have efficient decoding
Can approach channel capacity
t-Error-Correcting Codes
Minimum Distance
Modern Coding Theory
• Gallager’s idea:
– Find a subclass of random linear codes that
can be decoded efficiently
Modern Coding Theory
Encoder
BSC
Iterative
Decoder
Introduction to Channel Coding
Noisy channels: information bits 10010…10101… → noisy channel → corrupted bits 10e10…e01e1…
Low-Density Parity-Check Codes
Defined by random sparse graphs (Tanner graphs)
y1 y2 y3 yn
Important Recent Developments
Shokrollahi et al.
Capacity-achieving LDPC codes for the binary erasure
channel (BEC)
Standard Iterative Decoding over the BEC
01101001 → 01e0ee01  (e = erased bit)
Standard Iterative Decoding: Cont.
The check equations f resolve the erasures one at a time: e → 0, e → 1, e → 1.
Decoding is successful!
Algorithm A: Cont.
Received: 0 e e e 1 1 — the erased positions form a stopping set S, so iterative decoding halts.
Practical Challenges: Finite-Length Codes
Error Floor of LDPC Codes
Capacity-approaching LDPC codes suffer from the error floor problem.
[BER curves from 10⁻¹ down to 10⁻⁹: one code with a high error floor near 10⁻⁷, another with a low error floor.]
Noise and Error Sources
The Scaling Law
Raw Error Distribution over a Page
Properties and Requirements
• Error floor: target BER < 10⁻¹²
Ensembles for Non-uniform Error Correction
Check Nodes
Variable Nodes …
c1 c2 ck
Ensemble Properties
• Threshold effect
• Concentration theorem
• Density evolution
Design Methodology
Performance on VHM
[BER curves on VHM: rate 0.85, average degree 6, block lengths n = 10⁴ and n = 10⁵; BER axis from 10⁻² down to 10⁻⁹.]
Storage Capacity
Information-theoretic capacity for soft-decision decoding: 0.95 Gb.
[Storage capacity (Gbits) vs. number of pages (2000–6000): LDPC soft 0.84 Gb, LDPC hard 0.76 Gb, RS hard 0.52 Gb.]
Conclusion
Modern Coding Theory
Chapter 4: Digital Transmission
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
4.1 Line Coding
Some Characteristics
A signal has two data levels with a pulse duration of 1 ms. We calculate the
pulse rate and bit rate as follows:
Pulse Rate = 1 / 10⁻³ = 1000 pulses/s
Bit Rate = Pulse Rate × log₂ L = 1000 × log₂ 2 = 1000 bps
Example 2
A signal has four data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
Pulse Rate = 1 / 10⁻³ = 1000 pulses/s
Bit Rate = Pulse Rate × log₂ L = 1000 × log₂ 4 = 2000 bps
Solution
At 1 Kbps: 1000 bits sent, 1001 bits received → 1 extra bps
At 1 Mbps: 1,000,000 bits sent, 1,001,000 bits received → 1000 extra bps
Steps in Transformation
Data Code
Q (Quiet) 00000
I (Idle) 11111
H (Halt) 00100
J (start delimiter) 11000
K (start delimiter) 10001
T (end delimiter) 01101
S (Set) 11001
R (Reset) 00111
Figure 4.17 Example of 8B/6T encoding
Solution
The sampling rate must be twice the highest
frequency in the signal:
Solution
We need 4 bits: 1 bit for the sign and 3 bits for the
value. A 3-bit value can represent 2³ = 8 levels (000
to 111), which is more than what we need. A 2-bit
value is not enough since 2² = 4. A 4-bit value is too
much because 2⁴ = 16.
Solution
The human voice normally contains frequencies from 0 to 4000 Hz.
Sampling rate = 4000 x 2 = 8000 samples/s
Note:
Parallel Transmission
Serial Transmission
Note:
In synchronous transmission,
we send bits one after another
without start/stop bits or gaps.
It is the responsibility of the
receiver to group the bits.
Error Detection and Correction
Single-Bit Error
Burst Error
Note:
A burst error means that 2 or more bits in the data unit have changed.
Redundancy
Parity Check
Checksum
Example 2
Now suppose the word world in Example 1 is received by the receiver without
being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers
(6, 6, 4, 4, 4). The data are accepted.
Note:
Simple parity check can detect all single-bit errors. It can detect burst
errors only if the total number of errors in each data unit is odd.
However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the
even-parity rule and the whole block is discarded.
10100011 10001001 11011101 11100111 10101010
Example 6
The CRC-12 polynomial
x¹² + x¹¹ + x³ + x + 1
which has a degree of 12, will detect all burst errors affecting an odd number of
bits, will detect all burst errors with a length less than or equal to 12, and will
detect, 99.97 percent of the time, burst errors with a length of 12 or more.
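CRC generation and checking by modulo-2 long division can be sketched as below, using the CRC-12 polynomial above (the bit-list representation and helper names are mine):

```python
def mod2_remainder(bits, divisor):
    """Remainder of modulo-2 (XOR) long division of a bit list."""
    n = len(divisor) - 1
    reg = list(bits)
    for i in range(len(reg) - n):
        if reg[i]:                       # leading 1: XOR in the divisor
            for j, d in enumerate(divisor):
                reg[i + j] ^= d
    return reg[-n:]

def crc_encode(data, divisor):
    """Append the CRC so the whole frame is divisible by the generator."""
    n = len(divisor) - 1
    return data + mod2_remainder(data + [0] * n, divisor)

crc12 = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1]   # x^12 + x^11 + x^3 + x + 1
frame = crc_encode([1, 0, 1, 1, 0, 0, 1], crc12)
assert mod2_remainder(frame, crc12) == [0] * 12    # receiver sees zero
frame[3] ^= 1                                      # inject a single bit error
assert mod2_remainder(frame, crc12) != [0] * 12    # error detected
```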
•All sections are added using one’s complement to get the sum.
Note:
The receiver follows these steps:
•All sections are added using one’s complement to get the sum.
•If the result is zero, the data are accepted: otherwise, rejected.
Retransmission
4 3 7
5 4 9
6 4 10
7 4 11
Chapter 11 : Error-Control
Coding
Chapter 6
Modulo-2 arithmetic:
0 + 0 = 1 + 1 = 0
0 + 1 = 1 + 0 = 1
XOR with 1 complements a variable.
• Note dij = w(xi XOR xj)
Telecommunications Technology
Hamming Distance
x1 = 1110010
x2 = 1100001
y = 1100010
The Binary Symmetric Channel
[x0 → y0 and x1 → y1 with probability 1 − p; crossovers x0 → y1 and x1 → y0 with probability p.]
BSC and Hamming Distance
Geometric Point-of-View
Code = the set of all 8 three-digit words (000, 001, 010, 011, 100, 101, 110, 111). Minimum distance = 1.
In the BSC, any error changes a code word into a code word
Reduced Rate Source
Code = a set of 4 three-digit words (e.g., the even-weight words 000, 011, 101, 110). Minimum distance = 2.
Error Correction and Detection Capability
• The distance between two code words is the
number of places in which they differ.
• dmin is the distance between the two codes
which are closest together.
• A code with minimum distance dmin may be
used
– to detect dmin − 1 errors, or
– to correct ⌊(dmin − 1)/2⌋ errors.
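As a small sketch of these two capability formulas:

```python
def capability(dmin):
    """Errors a code with minimum distance dmin can detect / correct."""
    return {"detect": dmin - 1, "correct": (dmin - 1) // 2}

print(capability(3))   # {'detect': 2, 'correct': 1}  (e.g. Hamming codes)
print(capability(7))   # {'detect': 6, 'correct': 3}  (e.g. the Golay (23,12) code)
```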
[Received word y lies between codewords x1 and x2, at distance 1 from x1 and 2 from x2.]
The received word is more likely to have come from the closest code word.
Error Detection and Correction
Shannon’s Noisy Channel Coding Theorem:
• To achieve error-free communications
– Reduce Rate below Capacity
– Add structured redundancy
• Increase the distance between code words
Parity Check Equation
m0 m1 m2 m3 → c1
ck = Σᵢ₌₀ⁿ⁻¹ mᵢ, where addition is modulo 2:
+ | 0 1
0 | 0 1
1 | 1 0
Code word: x7 x6 x5 x4 x3 x2 x1 = m4 m3 m2 c3 m1 c2 c1
c1 = m1 + m2 + m4
c2 = m1 + m3 + m4
c3 = m2 + m3 + m4

m4 m3 m2 m1 c3 c2 c1
1  1  1  0  1  0  0
1  1  0  1  0  1  0
1  0  1  1  0  0  1
Parity Check and Generator Matrices
1 0 0 0 1 1 1
0 1 0 0 1 1 0
H
0 0 1 0 1 0 1
0 0 0 1 0 1 1
1 1 1 0 1 0 0
G 1 1 0 1 0 1 0
1 0 1 1 0 0 1
Codes
Message m = (m3 m2 m1 m0)
Code word x = (x6 x5 x4 x3 x2 x1 x0)
Transmitted code word: x = mH
Received word: y = x + e, where the error event e has a 1 wherever an error has occurred.
Syndrome: s = yGᵀ = mHGᵀ + eGᵀ = eGᵀ  (since HGᵀ = 0)
Hamming Code
• In the example, n = 7 and k = 4.
• There are r = n - k, or 3, parity check digits.
• This code has a minimum distance of 3, thus all single errors can be corrected.
• If no error occurs the syndrome is 000.
• If errors occur, the syndrome is a nonzero 3-bit sequence.
• Each single error gives a unique syndrome.
• Any single error is more likely to occur than any double, triple, or higher
order error.
• Any non-zero syndrome is most likely to have occurred because the
single error that could cause it occurred, than for any other reason.
• Therefore, deciding that the single error occurred is most likely the
correct decision.
• Hence, the term error correction.
Properties of Binomial Variables
• Given n bits with a probability of error p and a probability of no error q =
1-p.
– The probability of no errors is qⁿ
– The probability of a particular single error is p·qⁿ⁻¹
– The probability of a particular pattern of k errors is pᵏ·qⁿ⁻ᵏ (there are C(n,k) such patterns)
• It is straightforward to show that if p < 1/2 then any k-error event is more likely than any (k+1)-error event.
– The most likely number of errors is np. When p is very low then the
most likely error event is NO ERRORS, single errors are next most
likely.
• Single-error-correcting codes can be very effective!
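The probabilities above, with the binomial coefficient made explicit for counting k-error patterns (n = 7 and p = 0.01 are assumed illustration values):

```python
from math import comb

n, p = 7, 0.01
q = 1 - p

print(q**n)                            # probability of no errors
print(n * p * q**(n - 1))              # probability of exactly one error
print(comb(n, 2) * p**2 * q**(n - 2))  # probability of exactly two errors
# A specific k-error pattern has probability p**k * q**(n - k);
# multiplying by comb(n, k) counts all such patterns.
```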
Hamming Codes
• Hamming codes are (n, k) group codes where n = k + r is the length of the code words, k is the number of data bits, r is the number of parity check bits, and 2ʳ = n + 1.
• Typical codes are
• (7,4), r = 3 (2³ = 8)
• (15,11), r = 4 (2⁴ = 16)
• (63,57), r = 6
• (63, 57), r = 6
• Hamming codes are ideal, single error correcting codes.
Hamming Code Performance
• If the probability of bit error is pu without coding and pc with coding:
• the probability of a word error without coding is 1 − (1 − pu)ⁿ
• the probability of a word error using a (7,4) Hamming code is 1 − (1 − pc)⁷ − 7·pc·(1 − pc)⁶
• pu is the uncoded channel error probability.
• pc is the probability of bit error when Eb/No is reduced to 4/7ths of that at which pu was calculated.
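A sketch comparing the two word-error expressions (the pc value here is an assumed illustration of the 4/7 Eb/No penalty, not a computed one):

```python
def p_word_uncoded(pu, n=7):
    return 1 - (1 - pu)**n                      # at least one bit error

def p_word_hamming(pc):
    # word error = anything beyond the single correctable channel error
    return 1 - (1 - pc)**7 - 7 * pc * (1 - pc)**6

pu = 1e-3
pc = 2e-3   # assumed worse per-bit error rate after the rate-4/7 penalty
print(p_word_uncoded(pu))    # ~7.0e-3
print(p_word_hamming(pc))    # ~8.4e-5: far lower despite the worse channel
```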
Cyclic Code Generation
Applications of Cyclic Codes
• Cyclic codes (or cyclic redundancy check CRC) are used routinely to
detect errors in data transmission.
• Typical codes are the
– CRC-16: P(X) = X¹⁶ + X¹⁵ + X² + 1
– CRC-CCITT: P(X) = X¹⁶ + X¹² + X⁵ + 1
Convolutional Codes
• Block codes are memoryless codes – each output depends only on the current k-bit block being coded.
• The bits in a convolutional code depend on previous source bits.
• The source bits are convolved with the impulse response of a filter.
Why convolutional codes? Because the code set grows exponentially with code length –
the hypothesis being that the Rate could be maintained as n grew, unlike all block codes
– the Wozencraft contribution.
Convolutional Coder
Input 1 1 0 1 0 1 … → encoded output 11 10 11 01 01 01 …
[Shift register Xi Xi−1 Xi−2 feeding two output streams O1 and O2.]
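A sketch of a rate-1/2 encoder that reproduces the example output above. The taps O1 = Xi ⊕ Xi−2 and O2 = Xi ⊕ Xi−1 are my assumption — one choice consistent with the 11 10 11 01 01 01 sequence, since the slide does not state the generator connections explicitly:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder with assumed taps
    O1 = Xi ^ Xi-2 and O2 = Xi ^ Xi-1."""
    x1 = x2 = 0                  # shift-register contents Xi-1, Xi-2
    out = []
    for x in bits:
        out += [x ^ x2, x ^ x1]  # two output bits per input bit
        x1, x2 = x, x1           # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 0, 1]))
# -> [1,1, 1,0, 1,1, 0,1, 0,1, 0,1], i.e. 11 10 11 01 01 01
```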
Trellis Diagram
[States 00, 01, 10, 11, with branches for inputs 0 and 1.]
Input 1 1 0 1 0 1 → output 11 10 11 01 01 01
[Trellis path with branch labels 11, 10, 11, 01, 01, 01.]
Decoding
[Trellis with states 00, 01, 10, 11.]
Insert an error in a sequence of transmitted bits and try to decode it.
Sequential Decoding
• Decoder determines the most likely output sequence.
• Compares the received sequence with all possible sequences that might have
been obtained with the coder
• Selects the sequence that is closest to the received sequence.
Viterbi Decoding
• Choose a decoding-window width b in excess of the block length.
• Compute all code words of length b and compare each to the received code
word
• Select that code word closest to the received word.
• Re-encode the decoded frame and subtract from the received word.
Turbo Codes
• Turbo codes were invented by Berrou, Glavieux and Thitimajshima in 1993
• Turbo codes achieve excellent error correction capabilities at rates very close to the Shannon bound
• Turbo codes are concatenated or product codes
• Iterative decoding is used.
Interleaved Concatenated Code
Information
Checks on checks
Coding and Decoding
Turbo Code Performance
[Communication system: information source → transmitter → noisy channel → receiver → destination. Message sent = [1 1 1 1], noise = [0 0 1 0], message received = [1 1 0 1].]
• Repeats – repeat 3 times: Data = [1 1 1 1] → Message = [1 1 1 1] [1 1 1 1] [1 1 1 1]
• Single CheckSum – Data = [1 1 1 1] → Message = [1 1 1 1 0]
• Truth table (X-OR):
A B | X-OR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
• General form: Data = [1 1 1 1], Message = [1 1 1 1 0]
Shannon Efficiency
C = W·log₂(1 + S/N)
C is the channel capacity, W is the raw channel capacity, and S/N is the signal-to-noise ratio.
Repeat 3 times:
• This divides W by 3.
• It divides overall capacity by at least a factor of 3.
Single Checksum:
• Allows an error to be detected but requires the message to be discarded and resent.
• Each error reduces the channel capacity by at least a factor of 2 because of the thrown-away message.
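The capacity formula in use, as a sketch (the 3000-Hz / 30-dB example values are mine):

```python
from math import log2

def capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = W log2(1 + S/N); S/N is a linear ratio."""
    return bandwidth_hz * log2(1 + snr_linear)

# Hypothetical example: a 3000-Hz channel at 30 dB SNR (S/N = 1000)
print(capacity(3000, 1000))   # about 29,900 bps
```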
Encoding:
• Multiple Checksums
Message=[a b c d] Message=[1 0 1 0]
r= (a+b+d) mod 2 r=(1+0+0) mod 2 =1
s= (a+b+c) mod 2 s=(1+0+1) mod 2 =0
t= (b+c+d) mod 2
t=(0+1+0) mod 2 =1
Code=[r s a t b c d]
Code=[ 1 0 1 1 0 1 0 ]
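The three-checksum encoding can be verified directly:

```python
def encode(a, b, c, d):
    """Compute the three checksums and assemble Code = [r s a t b c d]."""
    r = (a + b + d) % 2
    s = (a + b + c) % 2
    t = (b + c + d) % 2
    return [r, s, a, t, b, c, d]

print(encode(1, 0, 1, 0))   # -> [1, 0, 1, 1, 0, 1, 0], matching the example
```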
Stochastic Simulation:
• 100,000 iterations
• Add errors to (7,4) data
• No repeated randoms
• Measure error detection
Results:
• One error: 100% detected
[Fig 1: percent of errors detected (%), 60%–100%.]
Two valid code words (blue). It is really a checksum: single error detection, no error correction.
This is a graphic representation of the "Hamming Distance".
Hamming Distance
Definition:
The number of elements that need to be changed (corrupted) to turn one codeword into another.
The Hamming distance from:
• [0101] to [0110] is 2 bits
• [1011101] to [1001001] is 2 bits
• "butter" to "ladder" is 4 characters
• "roses" to "toned" is 3 characters
Allows:
• Error DETECTION for Hamming Distance = 1. (For Hamming distances greater than 1, an error gives a false correction.)
• Error CORRECTION for Hamming Distance = 1.
Even More Dots
Allows:
• Error DETECTION for Hamming Distance = 2. (For Hamming distances greater than 2, an error gives a false correction; at a Hamming distance of exactly 2 an error is detected but cannot be corrected.)
• Error CORRECTION for Hamming Distance = 1.
Code Space:
• 2-dimensional
• 5 element states
Circle packing makes more efficient use of the code-space.
Cannon Balls
Efficient circle packing is the same as efficient 2-d code spacing; efficient sphere packing is the same as efficient 3-d code spacing.
• http://wikisource.org/wiki/Cannonball_stacking
• http://mathworld.wolfram.com/SpherePacking.html
More on Codes
• Hamming (11,7)
• Golay Codes
• Convolutional Codes
• Reed-Solomon Error Correction
• Turbo Codes
• Digital Fountain Codes
An Example
We will
• Encode a message
• Add noise to the transmission
• Detect the error
• Repair the error
To encode our message we multiply the matrix H by the message:
code = H · message
where multiplication is the logical AND and addition is the logical XOR.

H =
1 0 0 0 0 1 1
0 1 0 0 1 0 1
0 0 1 0 1 1 0
0 0 0 1 1 1 1

But why? You can verify that:
Hamming[1 0 0 0] = [1 0 0 0 0 1 1]
Hamming[0 1 0 0] = [0 1 0 0 1 0 1]
Hamming[0 0 1 0] = [0 0 1 0 1 1 0]
Hamming[0 0 0 1] = [0 0 0 1 1 1 1]

For the message [0 1 1 0]:
  0 1 0 0 1 0 1
+ 0 0 1 0 1 1 0
= 0 1 1 0 0 1 1
Code => [0 1 1 0 0 1 1]
Overview
• Base model
p (i, j ), p(i,j) 0
p f , i, j p(i,j)z f
z , p(i,j)>0
0
• Decoding
– Tanner Graph
– Sum Product Algorithm
Future Work
• Implement these algorithms in software
• Find decoding algorithms that speed up the process
Chapter 11: Data Link Control and Protocols
11.1 Flow and Error Control
Flow Control
Flow control refers to a set of procedures used to restrict the amount of
data that the sender can send before waiting for acknowledgment.
Error Control
Error control in the data link layer is based on automatic repeat
request, which is the retransmission of data.
Operation
Bidirectional Transmission
Sequence Number
Resending Frames
Operation
11.6 Sender sliding window
Operation
Bidirectional Transmission
Pipelining
Solution
The bandwidth-delay product is
The system can send 20,000 bits during the time it takes for the data to go from
the sender to the receiver and then back again. However, the system sends only
1000 bits. We can say that the link utilization is only 1000/20,000, or 5%. For
this reason, for a link with high bandwidth or long delay, use of Stop-and-Wait
ARQ wastes the capacity of the link.
Example 2
What is the utilization percentage of the link in Example 1 if the link uses Go-
Back-N ARQ with a 15-frame sequence?
Solution
The bandwidth-delay product is still 20,000. The system can send up to 15
frames or 15,000 bits during a round trip. This means the utilization is
15,000/20,000, or 75 percent. Of course, if there are damaged frames, the
utilization percentage is much less because frames have to be resent.
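The two utilization calculations, as a sketch:

```python
# Bandwidth-delay product from Example 1: 20,000 bits; frames are 1000 bits.
bdp = 20_000
frame_bits = 1000

stop_and_wait = frame_bits / bdp        # one frame outstanding per round trip
go_back_n_15 = 15 * frame_bits / bdp    # up to 15 frames per round trip

print(f"{stop_and_wait:.0%}")   # 5%
print(f"{go_back_n_15:.0%}")    # 75%
```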
Frames
Frame Format
Examples
Data Transparency
Figure 11.1 A frame in a character-oriented protocol
Figure 11.2 Byte stuffing and unstuffing
Figure 11.3 A frame in a bit-oriented protocol
Figure 11.4 Bit stuffing and unstuffing
11-2 FLOW AND ERROR CONTROL
11-3 PROTOCOLS
Now let us see how the data link layer can combine
framing, flow control, and error control to achieve the
delivery of data from one node to another.
11-4 NOISELESS CHANNELS
Figure 11.6 The design of the simplest protocol with no flow or error control
Figure 11.7 Flow diagram for Example 11.1
Figure 11.8 Design of Stop-and-Wait Protocol
Figure 11.9 Flow diagram for Example 11.2
11-5 NOISY CHANNELS
Note
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting it when the timer expires.
In Stop-and-Wait ARQ:
we use sequence numbers to number the frames.
The sequence numbers are based on modulo-2
arithmetic.
In Stop-and-Wait ARQ, the acknowledgment
number always announces in modulo-2 arithmetic the
sequence number of the next frame expected.
Figure 11.11 Flow diagram for an example of Stop-and-Wait ARQ.
Example 11.4
Example 11.5
Solution
The bandwidth-delay product is still 20,000 bits. The
system can send up to 15 frames or 15,000 bits during a
round trip. This means the utilization is 15,000/20,000, or
75 percent. Of course, if there are damaged frames, the
utilization percentage is much less because frames have to
be resent.
Note
In the Go-Back-N Protocol, the sequence numbers are modulo 2ᵐ, where m is the size of the sequence number field in bits.
Figure 11.12 Send window for Go-Back-N ARQ
Note
The send window is an abstract concept defining an imaginary box of size 2ᵐ − 1 with three variables: Sf, Sn, and Ssize.
Figure 11.13 Receive window for Go-Back-N ARQ
Figure 11.15 Window size for Go-Back-N ARQ
Figure 11.16 Flow diagram for Example 11.6
This is an example of a case where the forward channel is reliable, but the reverse is not. No data frames are lost.
Figure 11.17 Flow diagram for Example 11.7
Scenario showing what happens when a frame is lost.
Figure 11.18 Send window for Selective Repeat ARQ
Figure 11.19 Receive window for Selective Repeat ARQ
Figure 11.21 Selective Repeat ARQ, window size
Figure 11.22 Delivery of data in Selective Repeat ARQ
Figure 11.23 Flow diagram for Example 11.8
Scenario showing how Selective Repeat behaves when a frame is lost.
11-6 HDLC
Figure 11.25 Normal response mode
Figure 11.27 HDLC frames
Control field format for the different frame types.
Table 11.1 U-frame control command and response
Figure 11.31 Example of piggybacking with error
11-7 POINT-TO-POINT PROTOCOL
Figure 11.33 Transition phases
Figure 11.35 LCP packet encapsulated in a frame
Table 11.2 LCP packets
Table 11.3 Common options
Figure 11.36 PAP packets encapsulated in a PPP frame
Figure 11.37 CHAP packets encapsulated in a PPP frame
Figure 11.38 IPCP packet encapsulated in PPP frame
Code value for IPCP packets.
A Survey of Advanced
FEC Systems
Eric Jacobsen
Minister of Algorithms, Intel Labs
Communication Technology Laboratory/
Radio Communications Laboratory
July 29, 2004
With a lot of material from Bo Xia, CTL/RCL
www.intel.com/labs
Communication and Interconnect Technology
Outline
What is Forward Error Correction?
The Shannon Capacity formula and what it means
A simple Coding Tutorial
A simple example
A system transmits messages of two bits each through a channel
that corrupts each bit with probability Pe.
Tx Data = 01 Rx Data = 00
In this case a single bit error has corrupted the received symbol, but
it is still a valid symbol in the list of possible symbols. The most
fundamental coding trick is just to expand the number of bits
transmitted so that the receiver can determine the most likely
transmitted symbol just by finding the valid codeword with the
minimum Hamming distance to the received symbol.
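The decision rule just described can be sketched in a few lines. The 5-bit codebook below is illustrative, not from the slides; it is chosen so that every pair of codewords differs in at least three positions, which is what lets a single bit error be corrected.

```python
# Minimal sketch of minimum-Hamming-distance decoding for the 2-bit
# message example. CODEBOOK is an illustrative expansion of each 2-bit
# message into a 5-bit codeword (minimum pairwise distance 3).
CODEBOOK = {
    "00": "00000",
    "01": "01101",
    "10": "10110",
    "11": "11011",
}

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the message whose codeword is closest to the received word."""
    return min(CODEBOOK, key=lambda m: hamming_distance(CODEBOOK[m], received))

# A single bit error in "01101" (the codeword for message "01") is corrected:
print(decode("01100"))  # -> 01
```

With minimum distance 3, any single-bit error leaves the received word closer to the transmitted codeword than to any other, so the `min` always recovers the right message.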
Continuing the Simple Example
A one-to-one mapping of symbol to codeword is produced. [Table: symbol-to-codeword mapping.]
Coding Gain
The difference in performance between an uncoded and a coded
system, considering the additional overhead required by the code,
is called the Coding Gain. In order to normalize the power required
to transmit a single bit of information (not a coded bit), Eb/No is used
as a common metric, where Eb is the energy per information bit, and
No is the noise power in a unit-Hertz bandwidth.
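As a small numeric sketch of that normalization (the function name and the example values are our own, not from the slides):

```python
import math

def ebno_db(es_no_db, code_rate, bits_per_symbol=1):
    """Convert symbol SNR (Es/No) to Eb/No in dB. Each symbol carries
    code_rate * bits_per_symbol information bits, so that overhead is
    subtracted out to put coded and uncoded systems on a common axis."""
    return es_no_db - 10 * math.log10(code_rate * bits_per_symbol)

# A rate-1/2 coded system must be compared at the same Eb/No, not Es/No:
print(ebno_db(3.0, 1.0))   # uncoded binary signaling: Eb/No equals Es/No
print(ebno_db(3.0, 0.5))   # rate 1/2: Eb/No = 3.0 + 3.01 dB, about 6.01 dB
```

The half-rate code pays roughly 3 dB of overhead per information bit, which is exactly what a fair coding-gain comparison has to account for.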
Coding Gain and Distance to Channel Capacity Example
[Figure: BER vs. Eb/No (dB) curves comparing two Turbo Codes (R = 3/4 with RS and R = 9/10 with RS) against a concatenated Viterbi-RS system (R = 3/4) and uncoded QPSK (the "matched-filter bound"). Capacity limits for R = 3/4 and R = 9/10 are marked near 1.62 and 3.2 dB. One Turbo Code operates about 1.4 dB from capacity with about 5.95 dB of coding gain; the other about 2.58 dB from capacity with about 6.35 dB of coding gain. The TC with R = 9/10 appears to be inferior to the R = 3/4 Vit-RS system, but is actually operating closer to capacity.]
Shannon's Paper (1948)
Hamming defines basic binary codes
Reed and Solomon define ECC technique
BCH codes proposed
Gallager's thesis on LDPCs
Viterbi's paper on decoding convolutional codes
Berlekamp and Massey rediscover Euclid's polynomial technique and enable practical algebraic decoding
Forney suggests concatenated codes
Early practical implementations of RS codes for tape and disk drives
Ungerboeck's TCM Paper (1982)
RS codes appear in CD players
First integrated Viterbi decoders (late 1980s)
TCM heavily adopted into standards
Berrou's Turbo Code paper (1993)
Turbo Codes adopted into standards (DVB-RCS, 3GPP, etc.)
Renewed interest in LDPCs due to TC research
LDPC beats Turbo Codes for the DVB-S2 standard (2003)
Block Codes
Generally, a block code is any code defined with a finite codeword length.
The Code Rate, R, can be adjusted by shortening the data field (using zero padding)
or by “puncturing” the parity field.
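Both rate adjustments can be sketched on a systematic (7,4) Hamming code. The generator matrix below is one common choice and the helper names are our own; real systems simply do not transmit the known zeros or the punctured parity bits.

```python
# Sketch of the two rate-adjustment tricks: shortening the data field
# (zero padding) and puncturing the parity field. Illustrative only.
G = [  # systematic generator matrix: codeword = data . G (mod 2)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def encode(data):
    """Multiply a 4-bit data row by G over GF(2) -> 7-bit codeword."""
    return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G)]

def shorten(data3):
    """Shortening: zero-pad the data to 4 bits, encode, then drop the
    known zero -> a (6,3) code with relatively more redundancy."""
    return encode([0] + data3)[1:]

def puncture(data):
    """Puncturing: encode, then delete one parity bit -> a (6,4) code
    with higher rate but weaker protection."""
    return encode(data)[:-1]

print(len(shorten([1, 0, 1])))      # 6 bits carrying 3 data bits, R = 1/2
print(len(puncture([1, 0, 1, 1])))  # 6 bits carrying 4 data bits, R = 2/3
```

Both operations start from the same mother code; they just trade redundancy against rate in opposite directions.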
Convolutional Codes
Convolutional codes are typically decoded using the Viterbi algorithm, whose complexity increases exponentially with the constraint length. Alternatively, a sequential decoding algorithm can be used, which requires a much longer constraint length for similar performance.
Convolutional Codes - II
Concatenated Codes
Data → RS Encoder (outer code) → Interleaver → Conv. Encoder (inner code) → Channel → Viterbi Decoder → De-Interleaver → RS Decoder → Data
Concatenating Convolutional Codes
Parallel and serial concatenation.
Serial concatenation: Data → CC Encoder1 → Interleaver → CC Encoder2 → Channel → Viterbi/APP Decoder → De-Interleaver → Viterbi/APP Decoder → Data
Parallel concatenation: Data → CC Encoder1 directly, and through an Interleaver → CC Encoder2 → Channel → Viterbi/APP Decoder (plus a De-Interleaver feeding a second Viterbi/APP Decoder) → Combiner → Data
Iterative Decoding of CCCs
[Figure: iterative decoding loop. Rx Data feeds a Viterbi/APP decoder whose output passes through an Interleaver to a second Viterbi/APP decoder; that decoder's output returns through a De-Interleaver to the first, and the loop produces the Data estimate.]
Turbo Codes add coding diversity by encoding the same data twice through concatenation. Soft-output decoders are used, which can provide reliability-update information about the data estimates to each other for use during a subsequent decoding pass.
The two decoders, each working on a different codeword, can “iterate” and continue
to pass reliability update information to each other in order to improve the probability
of converging on the correct solution. Once some stopping criterion has been met,
the final data estimate is provided for use.
These Turbo Codes provided the first known means of achieving decoding
performance close to the theoretical Shannon capacity.
MAP/APP decoders
Maximum A Posteriori/A Posteriori Probability
Two names for the same thing
Basically runs the Viterbi algorithm across the data sequence in both
directions
~Doubles complexity
Becomes a bit estimator instead of a sequence estimator
Optimal for Convolutional Turbo Codes
Need two passes of MAP/APP per iteration
Essentially 4x computational complexity over a single-pass Viterbi
Soft-Output Viterbi Algorithm (SOVA) is sometimes substituted as a
suboptimal simplification compromise
Turbo Code Performance II
[Figure: BER performance curves; the distance to capacity for QPSK at R = 1/2 is 0.2 dB.]
Tricky Turbo Codes
Repeat-Accumulate codes: Repeat Section (1:2, R = 1/2, outer code) → Interleaver → Accumulate Section (differential encoder, D with feedback, R = 1, inner code).
Since the differential encoder has R = 1, the final code rate is determined by the amount of repetition used.
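A minimal sketch of the repeat/interleave/accumulate chain (the function name is our own, and the identity interleaver is a placeholder for a real pseudo-random one):

```python
def ra_encode(bits, repeat=2, interleaver=None):
    """Toy repeat-accumulate encoder: repeat each bit (outer R = 1/repeat),
    permute, then run a rate-1 accumulator (differential encoder)."""
    repeated = [b for b in bits for _ in range(repeat)]  # outer repetition
    if interleaver is None:                              # identity permutation
        interleaver = list(range(len(repeated)))         # (illustrative only)
    shuffled = [repeated[i] for i in interleaver]
    out, acc = [], 0
    for b in shuffled:
        acc ^= b            # accumulator: running mod-2 sum, i.e. 1/(1+D)
        out.append(acc)
    return out

print(len(ra_encode([1, 0, 1], repeat=2)))  # -> 6, overall R = 1/2
```

Because the accumulator is rate 1, the output length (and hence the code rate) is set entirely by the repetition factor, matching the point made above.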
Turbo Product Codes
Arranges the data in a 2-dimensional array (data field plus row and column parity), and then applies a Hamming code to each row and column as shown.
Since the constituent codes are Hamming codes, which can be decoded simply, the
decoder complexity is much less than Turbo Codes. The performance is close to capacity
for code rates around R = 0.7-0.8, but is not great for low code rates or short blocks. TPCs
have enjoyed commercial success in streaming satellite applications.
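The row-and-column idea can be sketched with the simplest possible constituent code, a single even-parity bit per row and per column, rather than the Hamming constituents the slide describes (single parity keeps the sketch short). A lone bit error is then located by the one failing row and the one failing column.

```python
# Toy product code: append even parity to each row, then a parity row
# over the columns. A single error sits at the crossing of the unique
# failing row and failing column.
def encode_product(data):
    """data: list of equal-length rows of 0/1 bits."""
    rows = [r + [sum(r) % 2] for r in data]              # row parity
    col_parity = [sum(col) % 2 for col in zip(*rows)]    # column parity
    return rows + [col_parity]

def correct_single_error(block):
    """Flip the bit at the intersection of the failing row and column."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        block[bad_rows[0]][bad_cols[0]] ^= 1
    return block

block = encode_product([[1, 0, 1], [0, 1, 1]])
clean = [row[:] for row in block]
block[1][2] ^= 1                    # inject one channel error
assert correct_single_error(block) == clean
```

Replacing the parity checks with Hamming codes gives each row and column its own correcting power, which is what makes the real TPC decoder both simple and effective.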
Low Density Parity Check Codes
Iterative decoding of simple parity check codes
First developed by Gallager, with iterative decoding, in 1962!
Published examples of good performance with short blocks
Kou, Lin, Fossorier, Trans IT, Nov. 2001
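A minimal sketch of iterative decoding in Gallager's spirit is hard-decision bit-flipping: every unsatisfied parity check votes against the bits it involves, and the most-accused bit is flipped. The tiny parity-check matrix below is illustrative only (each bit participates in exactly two checks so a single error is pinned down uniquely); real LDPC decoders pass soft messages instead.

```python
# Toy bit-flipping decoder over a small parity-check matrix H.
H = [  # each row is one parity check over the 6 codeword bits
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
]

def bit_flip_decode(word, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        failed = [row for row in H
                  if sum(h * b for h, b in zip(row, word)) % 2]
        if not failed:
            return word                      # all parity checks satisfied
        votes = [sum(row[j] for row in failed) for j in range(len(word))]
        word[votes.index(max(votes))] ^= 1   # flip the most-suspect bit
    return word

# The all-zero word is a valid codeword; flip one bit and decode it back:
print(bit_flip_decode([0, 0, 0, 1, 0, 0]))  # -> [0, 0, 0, 0, 0, 0]
```

The loop structure (check, exchange, update, repeat) is the same skeleton the soft message-passing decoders use on the Tanner graph shown next.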
[Figure: Tanner graph; check nodes connected by edges to variable nodes (codeword bits).]
Iteration Processing
1st half iteration: compute the α's, β's, and r's for each edge (one per parity bit):
α(i+1) = max*(α(i), q(i))
β(i) = max*(β(i+1), q(i))
r(i) = max*(α(i), β(i+1))
[Figure: check-node trellis showing r(i), q(i), and the message mVn passed back to the variable node.]
LDPC Performance Example
Current State-of-the-Art
Block Codes
Reed-Solomon widely used in CD-ROM, communications standards.
Fundamental building block of basic ECC
Convolutional Codes
K = 7 CC is very widely adopted across many communications standards
K = 9 appears in some limited low-rate applications (cellular telephones)
Often concatenated with RS for streaming applications (satellite, cable, DTV)
Turbo Codes
Limited use due to complexity and latency – cellular and DVB-RCS
TPCs used in satellite applications – reduced complexity
LDPCs
Recently adopted in DVB-S2, ADSL, being considered in 802.11n, 802.16e
Complexity concerns, especially memory – expect broader consideration
Cyclic Codes for Error Detection
W. W. Peterson and D. T. Brown
Presented by Maheshwar R Geereddy
Notations
k = number of binary digits in the message before encoding
n = number of binary digits in the encoded message
n - k = number of check bits
• Compute X^(n-k) G(X)
• R(X) = remainder of X^(n-k) G(X) / P(X)
• Add the remainder to X^(n-k) G(X):
- F(X) = X^(n-k) G(X) + R(X)
Implementation
• Briefly, to encode a message G(X), n-k zeros are annexed (i.e., multiplication by X^(n-k) is performed) and then X^(n-k) G(X) is divided by the polynomial P(X) of degree n-k. The remainder is then subtracted from X^(n-k) G(X), replacing the n-k zeros.
• This encoded message is divisible by P(X), which is what the receiver checks to detect errors.
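The procedure above can be sketched directly as mod-2 long division on bit lists (msb first). The divisor polynomial used in the example, P(X) = X^3 + X + 1, is just an illustration; real standards fix P(X), as in CRC-8 or CRC-32.

```python
def crc_remainder(message_bits, p_bits):
    """Remainder R(X) of X^(n-k) G(X) divided by P(X), msb first."""
    work = message_bits + [0] * (len(p_bits) - 1)  # annex n-k zeros
    for i in range(len(message_bits)):
        if work[i]:                                # leading coefficient set
            for j, p in enumerate(p_bits):
                work[i + j] ^= p                   # mod-2 subtraction = XOR
    return work[-(len(p_bits) - 1):]

def crc_encode(message_bits, p_bits):
    """F(X) = X^(n-k) G(X) + R(X): the remainder replaces the annexed zeros."""
    return message_bits + crc_remainder(message_bits, p_bits)

def crc_check(word_bits, p_bits):
    """True iff the received word divides evenly by P(X)."""
    work = list(word_bits)
    for i in range(len(word_bits) - len(p_bits) + 1):
        if work[i]:
            for j, p in enumerate(p_bits):
                work[i + j] ^= p
    return not any(work)

P = [1, 0, 1, 1]                     # P(X) = X^3 + X + 1, so n - k = 3
codeword = crc_encode([1, 1, 0, 1], P)
print(codeword)                      # -> [1, 1, 0, 1, 0, 0, 1]
print(crc_check(codeword, P))        # -> True
```

Any error pattern that is not itself a multiple of P(X) leaves a nonzero remainder at the receiver, which is the detection mechanism the paper analyzes.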
[Table: shift-register contents after each input bit as the division circuit processes the message.]
Conclusion
• Cyclic codes for error detection provide high efficiency and ease of implementation.
• They lend themselves to standardization, as in CRC-8 and CRC-32.
Digital Transmission with
Convolutional Codes
Information a_1, a_2, ..., a_N → Encoder → b_1, b_2, ..., b_N → BSC (crossover probability p) → B^N → Viterbi Algorithm → Sink
Define D(B^N, A^N) = Hamming distance between the sequences.
Since log(p/(1-p)) < 0 for p < 1/2, maximum-likelihood decoding is equivalent to
min over a_1, a_2, ..., a_N of D(A^N, B^N)
Encoder: shift register with stages s1, s2; initial state s1 = s2 = 0.
Outputs per input bit i: o1 = i, o2 = i ⊕ s1, o3 = i ⊕ s1 ⊕ s2
Input 110100 → Output 111 100 010 110 011 001 000
Initial state s1 s2 = 00. State-diagram branches (input - output):
from 00: 0-000, 1-111
from 01: 0-001, 1-110
from 10: 0-011, 1-100
from 11: 0-010, 1-101
Trellis Representation (state s1s2; each input bit shifts into s1)
state 00: input 0 → output 000, next state 00; input 1 → output 111, next state 10
state 01: input 0 → output 001, next state 00; input 1 → output 110, next state 10
state 10: input 0 → output 011, next state 01; input 1 → output 100, next state 11
state 11: input 0 → output 010, next state 01; input 1 → output 101, next state 11
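A compact hard-decision Viterbi decoder for this rate-1/3 encoder (outputs i, i ⊕ s1, i ⊕ s1 ⊕ s2, with the input shifting into s1) can be sketched as follows. This is a generic textbook implementation, not code from the slides.

```python
def outputs(u, s1, s2):
    """The three output bits for input u in state (s1, s2)."""
    return (u, u ^ s1, u ^ s1 ^ s2)

def viterbi_decode(rx):
    """rx: list of received 3-bit tuples. Returns the most likely input bits
    by minimizing accumulated Hamming distance through the trellis."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    cost = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r in rx:
        new_cost, new_paths = {}, {}
        for (s1, s2) in states:
            for u in (0, 1):
                branch = sum(a != b for a, b in zip(outputs(u, s1, s2), r))
                nxt = (u, s1)                      # input shifts into s1
                c = cost[(s1, s2)] + branch
                if nxt not in new_cost or c < new_cost[nxt]:
                    new_cost[nxt] = c              # keep the survivor path
                    new_paths[nxt] = paths[(s1, s2)] + [u]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]

# Encode 1,0,1 (output 111 011 110), flip one channel bit, and decode:
rx = [(1, 1, 1), (0, 0, 1), (1, 1, 0)]  # one bit error in the middle symbol
print(viterbi_decode(rx))  # -> [1, 0, 1]
```

Each trellis step keeps only the best path into each of the four states, which is exactly the survivor pruning that makes Viterbi decoding linear in the sequence length.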
min over s_1, ..., s_N of D(A^N, B^N)
= min over s_N [ min over s_1, ..., s_(N-1) given s_N of ( D(A^(N-1), B^(N-1)) + d(a_N, b_N) ) ]
Transmitter → Channel → Equalizer → VA → Decisions
Transmitted signal: sum over i = 1..N of a_i p(t - iT)
Received signal: z(t) = sum over i = 1..N of a_i h(t - iT) + n(t)
Pulse correlations: r_(i-j) = integral from 0 of h(t - iT) h(t - jT) dt
Accumulated distance: D(Z_1, ..., Z_k, s_(k-m+1), ..., s_k) = -2 sum over i = 1..k of a_i Z_i + sum over i = 1..k, j = 1..k of a_i a_j r_(i-j)
Incremental distance: d(Z_k; s_(k+1), s_k) = -2 a_k Z_k + 2 a_k sum over i = k-m..k-1 of a_i r_(k-i) + a_k^2 r_0
Controlled ISI
The same model applies to Partial Response signaling.
Output: e(t) = (d m(t)/dt) * h(t) = sum over k of x_k h(t - kT), where x_k = a_k - a_(k-1) and h(t) is a Nyquist pulse.
x_k = 0 for an even number of ones; x_k = 1 for an odd number of ones.
[Figure: waveform over one signaling interval.]
Merges and State Reduction
Optimal paths through trellis (L = optical blur width)
Row Scan
[Figure: rainfall-observation Markov model; states Rainy and Showery, each observed as wet or dry, with no-rain transitions between them.]
• DNA double helix
- Sequences of four nucleotides: A, T, C, and G
- Pairing between strands: A-T and C-G bonding
• Genes
- Made up of codons, i.e. triplets of adjacent nucleotides
- Overlapping of genes
Nucleotide sequence: CGGATTC
[Figure: one codon shared by three overlapping genes (Gene 1, Gene 2, Gene 3).]
Hidden Markov Chain
Tracking genes. States:
S - start (first codon of gene)
P1-P4 - +1, ..., +4 from start
M1-M4 - -1, ..., -4 from start
E - stop
H - gap
Initial and transition probabilities are known.
Recognizing Handwritten Chinese Characters
Text-line images: set up an m × n grid.
Example: Segmenting Handwritten Characters
All possible segmentation paths → eliminating redundant paths → removal of overlapping paths → discarding near paths