Linear Block Codes

This document discusses linear block codes. It begins by introducing Hamming distance, which measures the difference between codewords. It then defines parameters for codes such as minimum distance, code rate, and code length. It explains that linear block codes add parity bits to information blocks and discusses encoding and decoding using generator and parity check matrices. It provides examples of forming these matrices and transforming generator matrices. The key points are that linear block codes add redundancy for error detection and correction and use algebraic properties to encode and decode messages.


Information Theory

Linear Block Codes

Jalal Al Roumy
Hamming distance
The intuitive concept of "closeness" of two words is formalized
through the Hamming distance d(x, y) of words x, y.
For two words (or vectors) x, y, d(x, y) = the number of positions in
which x and y differ.
Example: d(10101, 01100) = 3; d(first, fifth) = 3 (the words differ in positions 3, 4 and 5)
Properties of Hamming distance
(1) d (x, y) = 0 iff x = y
(2) d (x, y) = d (y, x)
(3) d (x, z) ≤ d (x, y) + d (y, z) triangle inequality

An important parameter of a code C is its minimum distance.


d (C) = min {d (x, y) | x, y ∈ C, x ≠ y}, because it gives the smallest
number of errors needed to change one codeword into another.
Theorem Basic error correcting theorem
(1) A code C can detect up to s errors if d (C) ≥ s + 1.
(2) A code C can correct up to t errors if d (C) ≥ 2t + 1.
Note – for binary linear codes, d (C) = the smallest weight w (C) of a
non-zero codeword.
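As a worked illustration of the definition (a minimal Python sketch; the helper name `hamming` is ours, not from the slides):

```python
def hamming(x, y):
    """Hamming distance: number of positions in which x and y differ."""
    assert len(x) == len(y), "defined for equal-length words"
    return sum(a != b for a, b in zip(x, y))

# Examples from the slide
print(hamming("10101", "01100"))  # 3
print(hamming("first", "fifth"))  # 3

# Triangle inequality: d(x, z) <= d(x, y) + d(y, z)
x, y, z = "10101", "01100", "11110"
print(hamming(x, z) <= hamming(x, y) + hamming(y, z))  # True
```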
Some notation

Notation: an (n,M,d)-code C is a code such that


• n - is the length of codewords.
• M - is the number of codewords.
• d - is the minimum distance in C.

Example:
C1 = {00, 01, 10, 11} is a (2,4,1)-code.
C2 = {000, 011, 101, 110} is a (3,4,2)-code.
C3 = {00000, 01101, 10110, 11011} is a (5,4,3)-code.

Comment: A good (n,M,d) code has small n and large M and d.
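The (n, M, d) parameters of the example codes can be checked mechanically; a short Python sketch (the function name is illustrative, not from the slides):

```python
from itertools import combinations

def min_distance(code):
    """d(C): smallest Hamming distance between distinct codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(code, 2))

C1 = ["00", "01", "10", "11"]
C3 = ["00000", "01101", "10110", "11011"]
print(len(C1[0]), len(C1), min_distance(C1))  # (n, M, d) = (2, 4, 1)
print(len(C3[0]), len(C3), min_distance(C3))  # (n, M, d) = (5, 4, 3)
```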

Code Rate
For a q-ary (n,M,d)-code we define the code rate, or
information rate, R, by

R = (log_q M) / n.

The code rate represents the ratio of the number of
input data symbols to the number of transmitted code
symbols. For example, for a Hadamard code of length 32
with 64 codewords,

R = log2(64) / 32 = 6/32.

This is an important parameter for real implementations,
because it shows what fraction of the bandwidth is being
used to transmit actual data.
Recall that log2(n) = ln(n)/ln(2)
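The rate formula above can be written as a one-liner; this sketch uses only the n = 32, M = 64 figures from the slide's Hadamard example (the function name is ours):

```python
from math import log

def code_rate(n, M, q=2):
    """R = log_q(M) / n for a q-ary (n, M, d)-code."""
    return log(M, q) / n

# Hadamard code example from the slide: n = 32, M = 64
print(code_rate(32, 64))  # 6/32 = 0.1875
```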
Equivalence of codes
Definition: Two q-ary codes are called equivalent if one can be obtained from
the other by a combination of operations of the following type:
(a) a permutation of the positions of the code.
(b) a permutation of symbols appearing in a fixed position.
Let a code be displayed as an M × n matrix. To what do operations
(a) and (b) correspond?
Distances between codewords are unchanged by operations (a), (b).
Consequently, equivalent codes have the same parameters ( n,M,d) (and
correct the same number of errors).

Examples of equivalent codes

(Two pairs of equivalent codes, each displayed as an M × n matrix: a binary pair and a ternary pair.)
Lemma: Any q-ary (n,M,d)-code over an alphabet {0,1,…,q−1} is equivalent to an
(n,M,d)-code which contains the all-zero codeword 00…0.
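That distances are unchanged by operation (a) can be checked directly (a Python sketch; helper names are ours, and the permutation chosen is arbitrary):

```python
from itertools import combinations

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

def permute_positions(code, perm):
    """Operation (a): apply one fixed position permutation to every codeword."""
    return [''.join(w[i] for i in perm) for w in code]

C = ["00000", "01101", "10110", "11011"]
C2 = permute_positions(C, [4, 0, 3, 1, 2])

# Pairwise distances are preserved, so the parameters (n, M, d) are unchanged
print([dist(x, y) for x, y in combinations(C, 2)] ==
      [dist(x, y) for x, y in combinations(C2, 2)])  # True
```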
The main coding theory problem
A good (n,M,d)-code has small n, large M and large d.
The main coding theory problem is to optimize one of the
parameters n, M, d for given values of the other two.
Notation: Aq(n,d) is the largest M such that there is a
q-ary (n,M,d)-code.

Introduction to linear codes
A linear code over GF(q) [Galois Field], where q is a prime power, is a subspace of
the vector space V(n, q) for some positive integer n.
C is a subspace of V(n, q) iff
(1) u + v ∈ C for all u and v in C
(2) a·u ∈ C for all u ∈ C, a ∈ GF(q)
A binary code is linear iff the sum of any two codewords is a codeword.
If C is a k-dimensional subspace of V(n, q) then the linear code is called an
[n, k]-code, or an [n, k, d]-code if the distance is added.

A q-ary [n, k, d]-code is a q-ary (n, q^k, d)-code, but of course not every
(n, q^k, d)-code is an [n, k, d]-code.
The all-zero vector 0 automatically belongs to a linear code.
The weight w(x) of a vector x in V(n, q) is defined to be the number of non-zero entries of x.
The minimum distance of a linear code is equal to the smallest of the weights of the non-zero
codewords.
Linear Block Codes
• Information is divided into blocks of length k
• r parity bits or check bits are added to each block
(total length n = k + r)
• Code rate R = k/n
• Decoder looks for codeword closest to received vector
(code vector + error vector)
• Tradeoffs between
• Efficiency
• Reliability
• Encoding/Decoding complexity

Linear Block Codes

message vector m × generator matrix G → code vector C
code vector C × parity-check matrix H^T → null vector 0

Operations of the generator matrix and the parity check matrix

The parity check matrix H is used to detect errors in the received code by
using the fact that c · H^T = 0 (null vector).

Let x = c ⊕ e be the received message; c is the correct code and e is the error.

Compute S = x · H^T = (c ⊕ e) · H^T = c·H^T ⊕ e·H^T = e·H^T

If S is 0 then the message is correct; otherwise there are errors in it, and from
commonly known error patterns the correct message can be decoded.
Linear Block Codes
• Linear Block Code
The codeword C of the linear block code is
C = m G
where m is the information block of length k and G is
the k × n generator matrix
G = [I_k | P],
where I_k is the k × k identity matrix.
• The parity check matrix
H = [P^T | I_{n−k}], where P^T is the
transpose of the matrix P.
Forming the generator matrix
The generator matrix is formed from the list of codewords by
ignoring the all-zero vector and the linear combinations; e.g.

    ( 0 0 0 0 0 0 0 )
    ( 1 1 1 1 1 1 1 )
    ( 1 0 0 0 1 0 1 )
    ( 1 1 0 0 0 1 0 )
    ( 0 1 1 0 0 0 1 )
    ( 1 0 1 1 0 0 0 )
    ( 0 1 0 1 1 0 0 )
C = ( 0 0 1 0 1 1 0 )   giving   G = ( 1 1 1 1 1 1 1 )
    ( 0 0 0 1 0 1 1 )                ( 1 0 0 0 1 0 1 )
    ( 0 1 1 1 0 1 0 )                ( 1 1 0 0 0 1 0 )
    ( 0 0 1 1 1 0 1 )                ( 0 1 1 0 0 0 1 )
    ( 1 0 0 1 1 1 0 )
    ( 0 1 0 0 1 1 1 )
    ( 1 0 1 0 0 1 1 )
    ( 1 1 0 1 0 0 1 )
    ( 1 1 1 0 1 0 0 )

C is a [7, 4, 3]-code.
Equivalent linear [n,k]-codes
Two k x n matrices generate equivalent linear codes over
GF(q) if one matrix can be obtained from the other by a
sequence of operations of the following types:
(R1) permutation of rows
(R2) multiplication of a row by a non-zero scalar
(R3) addition of a scalar multiple of one row to another
(C1) permutation of columns
(C2) multiplication of any column by a non-zero scalar

The row operations (R) preserve the linear independence


of the rows of the generator matrix and simply replace
one basis by another of the same code. The column
operations (C) convert the generator matrix to one for an
equivalent code.
Transforming the generator matrix
Transforming to the form G = [I_k | P]:

( 1 1 1 1 1 1 1 )    ( 1 1 1 1 1 1 1 )    ( 1 0 0 0 1 0 1 )
( 1 0 0 0 1 0 1 ) →  ( 0 1 1 1 0 1 0 ) →  ( 0 1 1 1 0 1 0 ) →
( 1 1 0 0 0 1 0 )    ( 0 0 1 1 1 0 1 )    ( 0 0 1 1 1 0 1 )
( 0 1 1 0 0 0 1 )    ( 0 1 1 0 0 0 1 )    ( 0 1 1 0 0 0 1 )

( 1 0 0 0 1 0 1 )    ( 1 0 0 0 1 0 1 )
( 0 1 0 0 1 1 1 ) →  ( 0 1 0 0 1 1 1 )
( 0 0 1 1 1 0 1 )    ( 0 0 1 0 1 1 0 )
( 0 0 0 1 0 1 1 )    ( 0 0 0 1 0 1 1 )

Therefore

G = [I_k | A] = ( 1 0 0 0 1 0 1 )
                ( 0 1 0 0 1 1 1 )
                ( 0 0 1 0 1 1 0 )
                ( 0 0 0 1 0 1 1 )
Encoding with the generator
Codeword = message vector u × G.
For example, where

G = ( 1 0 0 0 1 0 1 )
    ( 0 1 0 0 1 1 1 )
    ( 0 0 1 0 1 1 0 )
    ( 0 0 0 1 0 1 1 )

then
0 0 0 0 is encoded as 0 0 0 0 0 0 0
1 0 0 0 is encoded as 1 0 0 0 1 0 1
1 1 1 0 is encoded as 1 1 1 0 1 0 0

Note that the encoding of 1110 is found by adding R1, R2 and R3 of G.
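Encoding is just a GF(2) row selection; the example encodings above can be reproduced as follows (a sketch; the function name is ours):

```python
def encode(u, G):
    """c = u . G over GF(2): XOR together the rows of G selected by u."""
    c = [0] * len(G[0])
    for bit, row in zip(u, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

print(encode([0, 0, 0, 0], G))  # [0, 0, 0, 0, 0, 0, 0]
print(encode([1, 0, 0, 0], G))  # [1, 0, 0, 0, 1, 0, 1]
print(encode([1, 1, 1, 0], G))  # [1, 1, 1, 0, 1, 0, 0]  (= R1 + R2 + R3)
```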
Parity-check matrix
A parity check matrix H for an [n, k]-code C is an
(n − k) × n matrix such that x · H^T = 0 iff x ∈ C. A parity-
check matrix for C is a generator matrix for the dual
code C⊥. If G = [I_k | A] is the standard form generator
matrix for an [n, k]-code C, then the parity-check matrix
for C is H = [−A^T | I_{n−k}]. A parity check matrix of the
form [B | I_{n−k}] is said to be in standard form.

G = ( 1 . . 0   a_11      . .  a_1,n−k )
    ( .     .   .              .       )
    ( 0 . . 1   a_k1      . .  a_k,n−k )

H = ( −a_11      . .  −a_k1      1 . . 0 )
    ( .               .          .     . )
    ( −a_1,n−k   . .  −a_k,n−k   0 . . 1 )
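Over GF(2), −A^T = A^T, so H can be built directly from a standard-form G and checked against x · H^T = 0 (a sketch; `parity_check` is our name, and the G is the [7, 4] example above):

```python
def parity_check(G, k, n):
    """Build H = [A^T | I_{n-k}] from a standard-form binary G = [I_k | A]."""
    r = n - k
    A = [row[k:] for row in G]
    return [[A[i][j] for i in range(k)] + [1 if c == j else 0 for c in range(r)]
            for j in range(r)]

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
H = parity_check(G, 4, 7)

# Every row of G is orthogonal to every row of H over GF(2): G . H^T = 0
print(all(sum(g * h for g, h in zip(grow, hrow)) % 2 == 0
          for grow in G for hrow in H))  # True
```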
Decoding using the Slepian matrix
An elegant nearest-neighbour decoding scheme was
devised by Slepian in 1960.
• every vector in V(n, q) is in some coset of C
• every coset contains exactly q^k vectors
• two cosets are either disjoint or coincide

Let G = ( 1 0 1 1 )   giving C = {0000, 1011, 0101, 1110}
        ( 0 1 0 1 )

The Slepian (standard) array, with the codewords in the first
row and the coset leaders in the first column:

0000 1011 0101 1110
1000 0011 1101 0110
0100 1111 0001 1010
0010 1001 0111 1100

When y is received (e.g. 1111) its position is found. The decoder
decides that the error is the coset leader 0100, so x = y − e = 1011.
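Standard-array decoding returns the codeword at the head of y's column, which is a nearest codeword; a brute-force sketch (names ours):

```python
def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

C = ["0000", "1011", "0101", "1110"]

def decode_nearest(y):
    """Nearest-neighbour decoding: a codeword at minimum distance from y."""
    return min(C, key=lambda c: dist(c, y))

# 1111 is at distance 1 from more than one codeword; with ties broken in
# listed order this matches the array above, whose leader choice 0100 gives 1011.
print(decode_nearest("1111"))  # 1011
print(decode_nearest("0011"))  # 1011 (error pattern 1000)
```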
Syndrome decoding
Suppose C is a q-ary [n, k]-code with parity-check
matrix H. For any vector y ∈ V(n, q), the row vector
S(y) = y H^T is called the syndrome of y. Two vectors have
the same syndrome iff they lie in the same coset.

G = ( 1 0 1 1 )          H = ( 1 0 1 0 )
    ( 0 1 0 1 )              ( 1 1 0 1 )

The syndromes of the coset leaders from our example:
S(0000) = 00
S(1000) = 11
S(0100) = 01
S(0010) = 10

This then gives a syndrome look-up table:

syndrome z    coset leader f(z)
00            0000
11            1000
01            0100
10            0010
Decoding procedure
The rules:

Step 1  For a received vector y calculate S(y) = y H^T.
Step 2  Let z = S(y), and locate z in the first column of the look-up table.
Step 3  Decode y as y − f(z).

For example, if y = 1111, then S(y) = 01 and we decode 1111 − 0100 = 1011.
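The three steps translate directly into code, using the H and the look-up table from the slides above (a sketch; variable names are ours):

```python
H = [[1, 0, 1, 0],
     [1, 1, 0, 1]]

def syndrome(y):
    """Step 1: S(y) = y . H^T over GF(2)."""
    return tuple(sum(a * b for a, b in zip(y, row)) % 2 for row in H)

# Syndrome look-up table: z -> coset leader f(z)
table = {(0, 0): [0, 0, 0, 0],
         (1, 1): [1, 0, 0, 0],
         (0, 1): [0, 1, 0, 0],
         (1, 0): [0, 0, 1, 0]}

def decode(y):
    """Step 2: look up the coset leader; Step 3: subtract (XOR) it from y."""
    e = table[syndrome(y)]
    return [a ^ b for a, b in zip(y, e)]

print(decode([1, 1, 1, 1]))  # [1, 0, 1, 1]
```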
Example

G = ( 1 0 0 1 )     codewords = 2^2 = 4     vectors = 2^4 = 16
    ( 0 1 1 1 )

The Slepian matrix is

0000 1001 0111 1110
1000 0001 1111 0110
0100 1101 0011 1010
0010 1011 0101 1100

0011 is decoded as 0111
1110 is decoded as 1110

Parity check matrix

H = ( 0 1 1 0 )                       ( 0 1 )
    ( 1 1 0 1 )     S(y) = y H^T = y  ( 1 1 )
                                      ( 1 0 )
                                      ( 0 1 )

Syndrome look-up table

00 0000
01 1000
11 0100
10 0010
