TE361 Channel Coding 1
Channel Coding
“If you can’t explain it simply, you don’t understand it well enough”
- Albert Einstein
Channel Coding
Introduction
• In practice
– Most media are not perfect – noisy channels:
• Wireless link
– satellite
• Modem line
• Hard disk
• Can we
– recover the original message (without errors) after passing
through a noisy channel?
• How
– can we communicate in a reliable fashion over noisy channels?
Introduction
• The communication model
– we are using consists of a source that generates discrete symbols/
digital information
• This information is sent to a destination through a CHANNEL
• The channel
– can be associated with noise so we have two cases
1. Noiseless case
– The channel in this case transmits symbols without causing any
errors
2. Noisy case
– The channel in this case introduces noise that causes errors in the
received symbols at the destination
» Noise can
• be defined as any unwanted signal or effect in addition to
the desired signal
» To reduce the errors due to noise we would add systematic
redundancy to the information to be sent
• This is done through CHANNEL CODING
Introduction
• Assumptions
1. That the information generated at the source is ‘source-coded’
or compressed into a string of binary digits
2. Discrete Memoryless Channel (DMC)
• Each symbol is sent over the channel independently of the previous
symbols sent
• the behavior of the channel and the effect of the noise at time t will
not depend on the behavior of the channel or the effect of the noise
at any previous time
Introduction
• We defined
– information rate R and learnt that arbitrarily reliable
communication is possible for any R < C
• In this lecture
– we will study noisy channel models and compute the channel
capacity for such channels
– We will design and analyze block codes
Introduction
• Shannon put forth a stochastic model of the channel
– Input alphabet A
– Output alphabet B
– Probability transition matrix P(b/a)
• that expresses the probability of observing the output symbol b given that the
symbol a was sent
• describes the transmission behavior of the channel
C = \max_{p(x)} I(A;B)
I(A;B) = H(A) - H(A/B)
– Here
• H(A) is source entropy
• H(A/B) is the average information lost per symbol
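As a concrete illustration of these quantities, here is a minimal Python sketch (not from the slides; the function names are my own) that computes I(A;B) = H(A) - H(A/B) for a discrete memoryless channel, given an input distribution and a channel matrix:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_a, P):
    """I(A;B) = H(A) - H(A/B), where P[i, j] = p(b_j / a_i)."""
    p_a = np.asarray(p_a, dtype=float)
    P = np.asarray(P, dtype=float)
    p_ab = p_a[:, None] * P                 # joint distribution p(a_i, b_j)
    p_b = p_ab.sum(axis=0)                  # output distribution p(b_j)
    # H(A/B) = sum_j p(b_j) * H(A / b_j): the average posterior uncertainty
    H_A_given_B = sum(p_b[j] * entropy(p_ab[:, j] / p_b[j])
                      for j in range(len(p_b)) if p_b[j] > 0)
    return entropy(p_a) - H_A_given_B

print(mutual_information([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]))  # ~0.531 for a BSC with p = 0.1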
Discrete Memoryless Channel
• We need
– a mathematical model of data/information transmission over
unreliable communication channels
• Popular DMC models include
– The Binary Symmetric Channel (BSC)
– The Binary Erasure Channel (BEC)
Binary Symmetric Channel (BSC)
• This is a channel
– in which an input bit, 0 or 1,
• with probability (1 - p) passes through the channel intact
• with probability p gets flipped to the other value
• That is
– the probability of error or bit error rate (BER) is p
P(y=1/x=0) = P(y=0/x=1) = p
\mathbf{P} = \begin{bmatrix} P(y=0/x=0) & P(y=1/x=0) \\ P(y=0/x=1) & P(y=1/x=1) \end{bmatrix} = \begin{bmatrix} 1-p & p \\ p & 1-p \end{bmatrix}
• The mutual information is
I(X;Y) = H(X) - H(X/Y) = H(Y) - H(Y/X)
C = 1 - H(p)
– where H(p) is the binary entropy function
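A short numerical sketch of this result (assuming Python; the helper names are my own): the BSC capacity is one bit minus the binary entropy of the crossover probability.

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p log2 p - (1 - p) log2 (1 - p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with bit error rate p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0 -- a noiseless binary channel carries one bit per use
print(bsc_capacity(0.1))   # ~0.531 bits per channel use
print(bsc_capacity(0.5))   # 0.0 -- the output is independent of the input
```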
Binary Erasure Channel (BEC)
• Another effect
– that noise may have is to prevent the receiver from deciding
whether the symbol was a 0 or 1 (loss of signal)
• In this case
– the output alphabet includes an additional symbol “e”
• called the erasure symbol, which denotes a bit that the receiver could not
determine
• For a
– binary input {0, 1} the output alphabet consists of three symbols
{0, e, 1}
• This information channel
– is called a Binary Erasure Channel (BEC)
– it models bits that are lost (erased) in transmission
– the BEC does not model/capture the effect of bit inversion
Binary Erasure Channel (BEC)
\mathbf{P} = \begin{bmatrix} P(y=0/x=0) & P(y=e/x=0) & P(y=1/x=0) \\ P(y=0/x=1) & P(y=e/x=1) & P(y=1/x=1) \end{bmatrix} = \begin{bmatrix} 1-\alpha & \alpha & 0 \\ 0 & \alpha & 1-\alpha \end{bmatrix}
– where \alpha is the probability that a bit is erased
Binary Erasure Channel (BEC)
C = 1 - \alpha
• BEC
– Important model for wireless, mobile and satellite communication
channels
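A minimal sketch, assuming (as above) that the erasure probability is denoted alpha: each erased bit carries no information, so the capacity is simply the fraction of bits that get through.

```python
def bec_capacity(alpha):
    """Capacity of a binary erasure channel that erases each bit with probability alpha."""
    return 1.0 - alpha

print(bec_capacity(0.0))   # 1.0 -- no erasures
print(bec_capacity(0.2))   # 0.8 bits per channel use
print(bec_capacity(1.0))   # 0.0 -- every bit is erased
```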
Discrete Memoryless Channel
• To fully specify
– the behavior of an information channel it is necessary to specify
• the characteristics of the input
• as well as the channel matrix
• We will assume
– that the input characteristics are described by a probability
distribution over the input alphabet with
• p(xi) denoting the probability of symbol xi being input to the channel
• If the channel
– is fully specified then the output can be calculated by
P(y_j) = \sum_{i=0}^{r} p(y_j / x_i)\, p(x_i)
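The sum above is just a vector-matrix product. A small sketch with illustrative numbers (a BSC with p = 0.1 and a non-uniform input; these values are not from the slides):

```python
import numpy as np

p_x = np.array([0.8, 0.2])        # input distribution p(x_i)
P = np.array([[0.9, 0.1],         # channel matrix, P[i, j] = p(y_j / x_i)
              [0.1, 0.9]])

p_y = p_x @ P                     # P(y_j) = sum_i p(y_j / x_i) p(x_i)
print(p_y)                        # [0.74 0.26]
```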
Discrete Memoryless Channel
• Note well
– p(y_j / x_i) are the forward probabilities
– p(x_i / y_j) are the backward probabilities
p(x_i / y_j) = \frac{P(x_i, y_j)}{P(y_j)} = \frac{P(y_j / x_i)\, P(x_i)}{P(y_j)}
Problem
• Consider
– the binary information channel fully specified by
P(x=0) = 2/3, \quad P(x=1) = 1/3, \quad \text{and} \quad \mathbf{P} = \begin{bmatrix} 3/4 & 1/4 \\ 1/8 & 7/8 \end{bmatrix}
– Then, using
P(y_j) = \sum_{i=0}^{r} p(y_j / x_i)\, p(x_i)
P(y=0) = P(y=0/x=0) P(x=0) + P(y=0/x=1) P(x=1) = \frac{3}{4}\cdot\frac{2}{3} + \frac{1}{8}\cdot\frac{1}{3} = \frac{13}{24}
P(y=1) = 1 - P(y=0) = \frac{11}{24}
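The same computation can be checked numerically; a sketch using exact fractions:

```python
from fractions import Fraction as F

p_x = [F(2, 3), F(1, 3)]                      # P(x=0), P(x=1)
P = [[F(3, 4), F(1, 4)],                      # rows x = 0, 1; columns y = 0, 1
     [F(1, 8), F(7, 8)]]

p_y0 = P[0][0] * p_x[0] + P[1][0] * p_x[1]    # P(y=0)
p_y1 = 1 - p_y0                               # P(y=1)
print(p_y0, p_y1)                             # 13/24 11/24
```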
Solution
• The backward probabilities are:
P(x=0/y=0) = \frac{P(y=0/x=0)\, P(x=0)}{P(y=0)} = \frac{(3/4)(2/3)}{13/24} = \frac{12}{13}
P(x=1/y=0) = 1 - P(x=0/y=0) = 1 - \frac{12}{13} = \frac{1}{13}
P(x=1/y=1) = \frac{P(y=1/x=1)\, P(x=1)}{P(y=1)} = \frac{(7/8)(1/3)}{11/24} = \frac{7}{11}
P(x=0/y=1) = 1 - P(x=1/y=1) = 1 - \frac{7}{11} = \frac{4}{11}
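The backward probabilities follow from Bayes' rule; a sketch continuing the fractions above:

```python
from fractions import Fraction as F

p_x = [F(2, 3), F(1, 3)]
P = [[F(3, 4), F(1, 4)],
     [F(1, 8), F(7, 8)]]
p_y = [F(13, 24), F(11, 24)]

p_x0_y0 = P[0][0] * p_x[0] / p_y[0]           # P(x=0 / y=0)
p_x1_y1 = P[1][1] * p_x[1] / p_y[1]           # P(x=1 / y=1)
print(p_x0_y0, 1 - p_x0_y0)                   # 12/13 1/13
print(p_x1_y1, 1 - p_x1_y1)                   # 7/11 4/11
```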
Solution
• The average uncertainty
– we have about the input after the channel output y_j is observed is given by
H(X/y_j) = \sum_{x \in X} P(x/y_j) \log_2 \frac{1}{p(x/y_j)}
H(X/y=0) = P(x=0/y=0)\log_2\frac{1}{p(x=0/y=0)} + P(x=1/y=0)\log_2\frac{1}{p(x=1/y=0)} = \frac{12}{13}\log_2\frac{13}{12} + \frac{1}{13}\log_2 13 \approx 0.391
Solution
– Let's say we observe an output of y = 1
• Then
H(X/y=1) = P(x=0/y=1)\log_2\frac{1}{p(x=0/y=1)} + P(x=1/y=1)\log_2\frac{1}{p(x=1/y=1)} = \frac{4}{11}\log_2\frac{11}{4} + \frac{7}{11}\log_2\frac{11}{7} \approx 0.946
H(X/Y) = P(y=0) H(X/y=0) + P(y=1) H(X/y=1) = \frac{13}{24}(0.391) + \frac{11}{24}(0.946) \approx 0.645
– Thus
• the average information lost per symbol through the channel is about 0.645 bits
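A quick numerical check of the two conditional entropies and their average (a sketch; the helper h is my own name for the binary entropy function):

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

H_X_y0 = h(12/13)                                  # H(X / y=0) ~ 0.391
H_X_y1 = h(4/11)                                   # H(X / y=1) ~ 0.946
H_X_Y = (13/24) * H_X_y0 + (11/24) * H_X_y1        # equivocation H(X/Y)
print(round(H_X_y0, 3), round(H_X_y1, 3), round(H_X_Y, 3))   # 0.391 0.946 0.645
```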
Solution
• The source entropy is
H(X) = \sum_x p(x) \log_2 \frac{1}{p(x)} = p(x=0)\log_2\frac{1}{p(x=0)} + p(x=1)\log_2\frac{1}{p(x=1)} = \frac{2}{3}\log_2\frac{3}{2} + \frac{1}{3}\log_2 3 \approx 0.918
• The capacity
– of the channel in the presence of noise is
C = H(X) - H(X/Y) = 0.918 - 0.645 \approx 0.273 bits / channel use
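Finally, a sketch that puts the pieces together: the source entropy H(X) and the difference H(X) - H(X/Y) for this particular input distribution (the quantity the slide reports as the capacity in the presence of noise):

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

H_X = h(2/3)                                       # source entropy ~ 0.918
H_X_Y = (13/24) * h(12/13) + (11/24) * h(4/11)     # equivocation ~ 0.645
print(round(H_X, 3))                               # 0.918
print(round(H_X - H_X_Y, 3))                       # ~0.273 bits per channel use
```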