Introduction To Error Control Coding
2/03/2010
Introduction
Outline
- What is error control coding?
- Why is error control coding important?
- How does error control coding work?
- Where does error control coding fit in?
- Encoding
- Decoding
- Performance analysis: BSC channel
- Bandwidth efficiency vs power efficiency
The goal of error control coding is to encode information in such a way that, even if the channel or storage medium introduces errors, the receiver can correct the errors and recover the original transmission.
Outline
- What is error control coding?
- Why is error control coding important?
- How does error control coding work?
- Where does error control coding fit in?
- Encoding
- Decoding
- Performance analysis: BSC channel
- Bandwidth efficiency vs power efficiency
[Figure: block diagram of a digital communication/storage system showing where error control coding fits — the encoded data passes through a Modulator (Writing Unit), the noisy Digital Channel, and a Demodulator (Reading Unit), then through the Channel Decoder and Source Decoder to the Destination]
Encoding (1/5)
Aim: add redundant bits to the original data message to form a coded message.
- k digits are encoded into a sequence of n digits, called a codeword; in general n > k.
- k: message length; n: code length; R = k/n: code rate.
Codes are classified into two main types:
- Block codes: codes of fixed length, without memory.
- Convolutional codes: codes with memory.
Encoding (2/5)
Block codes: a message of k digits is mapped into a structured sequence of n digits.
Properties:
- n − k redundant bits are added to each message for protection against errors.
- Each encoding operation is independent of past encodings (no memory).
Example: k = 3, n = 6
- Data message: 100 111
- Coded message: 011100 000111
In general, both the message and the coded message consist of binary symbols, 0 and 1. There are thus 2^k distinct messages and 2^k corresponding binary codewords.
Encoding (3/5)
Example 1.1: Let k = 3 and n = 6. The following table gives a block code of length 6. The code rate is R = 1/2.
Message      Codeword
(0 0 0)      (0 0 0 0 0 0)
(1 0 0)      (0 1 1 1 0 0)
(0 1 0)      (1 0 1 0 1 0)
(1 1 0)      (1 1 0 1 1 0)
(0 0 1)      (1 1 0 0 0 1)
(1 0 1)      (1 0 1 1 0 1)
(0 1 1)      (0 1 1 0 1 1)
(1 1 1)      (0 0 0 1 1 1)
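Since the encoder has no memory, encoding such a block code amounts to a table look-up. A minimal sketch in Python (the dictionary CODEBOOK and the function name encode are our own choices; the mapping is copied from the table above):

```python
# Look-up table encoder for the (6, 3) block code of Example 1.1.
# Keys are the 2^3 = 8 messages, values are the corresponding codewords.
CODEBOOK = {
    (0, 0, 0): (0, 0, 0, 0, 0, 0),
    (1, 0, 0): (0, 1, 1, 1, 0, 0),
    (0, 1, 0): (1, 0, 1, 0, 1, 0),
    (1, 1, 0): (1, 1, 0, 1, 1, 0),
    (0, 0, 1): (1, 1, 0, 0, 0, 1),
    (1, 0, 1): (1, 0, 1, 1, 0, 1),
    (0, 1, 1): (0, 1, 1, 0, 1, 1),
    (1, 1, 1): (0, 0, 0, 1, 1, 1),
}

def encode(message):
    """Map a k = 3 bit message to its n = 6 bit codeword."""
    return CODEBOOK[tuple(message)]

# The data message 100 111 from Encoding (2/5):
print(encode([1, 0, 0]))  # (0, 1, 1, 1, 0, 0)
print(encode([1, 1, 1]))  # (0, 0, 0, 1, 1, 1)
```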
Encoding (4/5)
Convolutional codes: each block of k digits is mapped into an n-digit coded block.
Properties:
- The n-digit coded block depends not only on the current k-digit message block but also on the m (≥ 1) previous message blocks; that is, the encoder has memory of order m.
- The collection of all possible code sequences is called an (n, k, m) convolutional code.
Encoding (5/5)
Example 1.2: Let k = 1, n = 2 and m = 2. The following circuit generates a (2,1,2) convolutional code.
[Figure: encoder circuit for the (2,1,2) convolutional code — the input bit c_l enters a two-stage shift register holding c_{l-1} and c_{l-2}; exclusive-OR gates combine these bits to form the two output bits v_l^(1) and v_l^(2)]
v_l^(1) = c_l + c_{l-2}
v_l^(2) = c_l + c_{l-1} + c_{l-2}
(additions are modulo 2)
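These two equations translate directly into code. A minimal sketch in Python (the function name conv_encode and the state variables c1, c2 are our own; the register is assumed to start in the all-zero state):

```python
def conv_encode(bits):
    """Encode a bit sequence with the (2,1,2) convolutional code of Example 1.2.

    For each input bit c_l, two output bits are produced:
        v_l^(1) = c_l + c_{l-2}            (mod 2)
        v_l^(2) = c_l + c_{l-1} + c_{l-2}  (mod 2)
    """
    c1 = c2 = 0          # encoder memory: c_{l-1} and c_{l-2}, initially zero
    out = []
    for c in bits:
        v1 = (c + c2) % 2
        v2 = (c + c1 + c2) % 2
        out.extend([v1, v2])
        c1, c2 = c, c1   # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 0, 1, 0, 0, 1, 0]
```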
Decoding (1/3)
Aim: recover the original message from the received sequence of bits.
In more detail: suppose a codeword corresponding to a certain message is transmitted over a noisy channel, and let r be the corresponding received sequence. The receiver (or decoder), based on r, the encoding rules and the noise characteristics of the channel, makes a decision about which message was actually transmitted. This decision-making operation is called decoding, and the device which performs it is called a decoder.
Decoding (2/3)
[Figure: message c → Channel Encoder → codeword v → Digital Channel → received sequence r → Channel Decoder → estimate v̂]
Suppose the codeword v is transmitted and let r be the corresponding output of the digital channel. The decoder must produce an estimate v̂ of the transmitted codeword based on r. A decoding error occurs if v̂ ≠ v; because of the 1-1 relationship between c and v, this is equivalent to ĉ ≠ c. A decoding rule is a strategy for choosing an estimate v̂ for each possible received sequence r. Obviously, we would like to devise a decoding rule such that the probability of a decoding error is minimised. Such a decoding rule is called an optimum decoding rule.
Decoding (3/3)
Suppose all the messages are equally likely. Optimum decoding can then be done as follows: for every codeword v_j, compute the conditional probability P(r | v_j); the codeword with the largest conditional probability is chosen as the estimate of the transmitted codeword. This decoding rule is called maximum likelihood decoding (MLD).
- MLD advantage: lowest possible probability of decoding error.
- MLD disadvantage: may be highly complex.
- Goal: find low-complexity decoding schemes that come as close to MLD as possible. Some low-complexity schemes are capable of achieving MLD performance in certain scenarios.
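To make MLD concrete, the sketch below decodes the (6,3) code of Example 1.1 by brute force, assuming the binary symmetric channel introduced later in this lecture with crossover probability p, so that P(r | v_j) = p^d (1 − p)^(n−d), where d is the number of positions in which r and v_j differ. The codeword list and the function name mld are our own:

```python
# Brute-force maximum likelihood decoding of the (6,3) code of Example 1.1,
# assuming a binary symmetric channel with crossover probability p.
CODEWORDS = [
    (0, 0, 0, 0, 0, 0), (0, 1, 1, 1, 0, 0), (1, 0, 1, 0, 1, 0), (1, 1, 0, 1, 1, 0),
    (1, 1, 0, 0, 0, 1), (1, 0, 1, 1, 0, 1), (0, 1, 1, 0, 1, 1), (0, 0, 0, 1, 1, 1),
]

def mld(r, p=0.1):
    """Return the codeword v_j maximising P(r | v_j) = p**d * (1-p)**(n-d).

    For p < 0.5 this is simply the codeword closest to r in Hamming distance.
    """
    def likelihood(v):
        d = sum(ri != vi for ri, vi in zip(r, v))     # Hamming distance
        return p ** d * (1 - p) ** (len(r) - d)
    return max(CODEWORDS, key=likelihood)

# Codeword (0,1,1,1,0,0) is sent and the channel flips its third bit:
print(mld((0, 1, 0, 1, 0, 0)))  # (0, 1, 1, 1, 0, 0) -- the error is corrected
```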
Outline
- What is error control coding?
- Why is error control coding important?
- How does error control coding work?
- Where does error control coding fit in?
- Encoding
- Decoding
- Performance analysis: BSC channel
- Bandwidth efficiency vs power efficiency
Channel Decoder
How do we model the digital channel? One common model is the Binary Symmetric Channel (BSC). If the underlying channel is an additive white Gaussian noise (AWGN) channel, hard decisions made by the demodulator result in a BSC.
[Figure: BSC transition diagram — a transmitted 0 or 1 is received correctly with probability 1 − p and flipped with probability p, where p is the probability of error (crossover probability)]
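A BSC is straightforward to simulate: each transmitted bit is flipped independently with probability p. A minimal sketch in Python (the function name bsc is our own):

```python
import random

def bsc(bits, p):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

codeword = [0, 1, 1, 1, 0, 0]
received = bsc(codeword, p=0.1)   # e.g. [0, 1, 0, 1, 0, 0] if one bit is flipped
print(received)
```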
Summary of Lecture (1/2)
- What is error control coding?
- Why is error control coding important?
- How does error control coding work?
- Where does error control coding fit in?
Future lectures
- Focus on different encoding/decoding schemes.
- Each encoding/decoding scheme has different properties which make it suitable for different application needs (e.g. low complexity, good performance).
- We will try to understand these properties and the encoding and decoding processes.
- This requires an understanding of the binary field and vector spaces (Lecture 2).