Lecture 1
Outline
1 Introduction
3 Repetition codes
    Repetition codes
    Review of probability theory
    Decoding of the repetition codes
What’s information?
In all these cases, if we transmit data, e.g. a string of bits, over the
channel, there is some probability that the received message will
not be identical to the transmitted message.
We would prefer a communication channel for which this probability is zero, or so close to zero that for practical purposes it is indistinguishable from zero.
We denote the ith channel input by xi and the ith channel output by yi. Given channel input xi ∈ {0, 1} and channel output yi ∈ {0, 1}, the BSC is completely characterized by the channel transition probabilities p(yi | xi) given by
    P(y = 0 | x = 0) = 1 − f,    P(y = 0 | x = 1) = f,
    P(y = 1 | x = 0) = f,        P(y = 1 | x = 1) = 1 − f.

Here f = 0.1.
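The channel model above is easy to sanity-check by simulation; a minimal sketch (the function name `bsc` is ours) flips each bit independently with probability f and verifies that the empirical flip rate is close to f = 0.1.

```python
import random

def bsc(bits, f, rng):
    """Binary symmetric channel: flip each bit independently
    with probability f."""
    return [b ^ (rng.random() < f) for b in bits]

rng = random.Random(0)
n = 100_000
x = [0] * n                    # transmit the all-zeros sequence
y = bsc(x, 0.1, rng)
flips = sum(y) / n             # empirical flip rate; should be near 0.1
```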
[Figure: the communication system: source s → Encoder → t → Noisy channel → r → Decoder → ŝ.]
Repetition codes
Source sequence s    Transmitted sequence t
        0                     000
        1                     111
In what sense? In the sense of having the smallest probability of being wrong. So, given r, we need to find the most probable value of s.
Probability space
Definition (σ-field)
Let F be a collection of subsets of a non-empty set Ω. Then F is
called a σ-field (or σ-algebra) if the following conditions hold:
Ω ∈ F.
F is closed under complementation: if A ∈ F, then Aᶜ := {ω ∈ Ω : ω ∉ A} ∈ F.
F is closed under countable unions: if Ai ∈ F for i ∈ N, then ⋃_{i=1}^∞ Ai ∈ F.
Probability space
Conditional Probability
Definition
If A is any set in F with P(A) > 0, we define PA(·) on F as follows:

    PA(E) = P(A ∩ E) / P(A).

Clearly PA is a probability measure on F, and is called the conditional probability relative to A.
Bayes’ theorem
Theorem
Let {An} be a countable measurable partition of Ω, and E ∈ F with P(E) > 0; then we have for each m:

    P(Am | E) = P(E | Am) P(Am) / ∑n P(E | An) P(An).
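As a small numeric illustration of the theorem (the code names are ours), take the repetition code R3 over the BSC with f = 0.1 and a uniform prior on s, and compute the posterior of the source bit given a received block:

```python
from math import prod

f = 0.1                                  # BSC flip probability
prior = {0: 0.5, 1: 0.5}                 # uniform prior on the source bit
codeword = {0: (0, 0, 0), 1: (1, 1, 1)}  # R3 codewords t(s)

def likelihood(r, s):
    # P(r | s) = product over bits of P(r_n | t_n(s))
    return prod((1 - f) if rn == tn else f
                for rn, tn in zip(r, codeword[s]))

def posterior(r):
    # Bayes' theorem with the partition {s = 0}, {s = 1}
    joint = {s: likelihood(r, s) * prior[s] for s in (0, 1)}
    evidence = sum(joint.values())
    return {s: joint[s] / evidence for s in (0, 1)}

post = posterior((0, 1, 1))
# P(s = 1 | r = 011) = 0.081 / (0.081 + 0.009) = 0.9
```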
    P(r | s = 1) / P(r | s = 0) = ∏_{n=1}^{N} P(rn | tn(1)) / P(rn | tn(0)).
Examples

s      0    0    1    0    1    1    0
t     000  000  111  000  111  111  000
n     000  001  000  000  101  000  000
r     000  001  111  000  010  111  000
ŝ      0    0    1    0    0    1    0

corrected errors?
undetected errors?
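The decoding rule applied in the example above is majority vote within each block of three; a minimal sketch (names are ours) reproduces ŝ, including the undetected error in the fifth block:

```python
def decode_r3(received_blocks):
    """Majority-vote decoding of R3: each block of three received
    bits is mapped to whichever bit value occurs at least twice."""
    return [int(sum(block) >= 2) for block in received_blocks]

r = [(0, 0, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0),
     (0, 1, 0), (1, 1, 1), (0, 0, 0)]
s_hat = decode_r3(r)    # [0, 0, 1, 0, 0, 1, 0]
```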
    pb = ∑_{n=⌈(N+1)/2⌉}^{N} (N choose n) f^n (1 − f)^(N−n),    for odd N.
For N = 3, we have pb = 0.028.
Q: how many repetitions are required to get the probability of error down to 10⁻¹⁵?
Answer: About 61.
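The sum above is straightforward to evaluate numerically; a sketch (the function name is ours):

```python
from math import comb, ceil

def p_error(N, f):
    """Probability that a majority of the N repetitions are flipped
    on a BSC with flip probability f (N odd)."""
    return sum(comb(N, n) * f**n * (1 - f)**(N - n)
               for n in range(ceil((N + 1) / 2), N + 1))

pb3 = p_error(3, 0.1)    # 3(0.1)^2(0.9) + (0.1)^3 = 0.028
pb61 = p_error(61, 0.1)  # on the order of 10^-15
```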
[Figure: bit error probability pb versus rate for the repetition codes R1, R3, R5, …, R61 over the BSC with f = 0.1, on a linear scale (left) and a logarithmic scale (right). More useful codes combine higher rate with lower pb.]
Block code
[Figure: the (7,4) Hamming code represented by three overlapping circles: (a) the seven bits t1 … t7 arranged in the circles; (b) an example codeword.]
The first four transmitted bits, t1 t2 t3 t4, are set equal to the four source bits, s1 s2 s3 s4. The parity-check bits t5 t6 t7 are set so that the parity within each circle is even.
The (7,4) Hamming code has sixteen codewords {t}.
Any two codewords differ from each other in at least three bits.
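The sixteen codewords can be regenerated programmatically; the sketch below (names are ours) uses parity assignments matching the parity-check matrix H = [P I3] given later in the lecture, and confirms that the minimum pairwise distance is three.

```python
from itertools import product

def encode_hamming74(s):
    """(7,4) Hamming encoder: the first four bits are the source bits;
    each parity bit makes the parity within one circle even."""
    s1, s2, s3, s4 = s
    return (s1, s2, s3, s4,
            (s1 + s2 + s3) % 2,   # t5
            (s2 + s3 + s4) % 2,   # t6
            (s1 + s3 + s4) % 2)   # t7

codewords = [encode_hamming74(s) for s in product((0, 1), repeat=4)]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

d_min = min(distance(a, b)
            for i, a in enumerate(codewords) for b in codewords[i + 1:])
# d_min == 3: any two codewords differ in at least three bits
```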
The encoding operation can be written t = Gᵀs, where G is the generator matrix.
[Figure: pictorial decoding of the (7,4) Hamming code: (a) the received bits r1 … r7 in the three circles; (b)–(e′) example received vectors, with violated parity checks marked ∗ and the bit to flip identified.]
In modulo-2 arithmetic, −1 ≡ 1, so

    H = [P I3] = [ 1 1 1 0 1 0 0 ]
                 [ 0 1 1 1 0 1 0 ]
                 [ 1 0 1 1 0 0 1 ]
The decoding problem is to find the most probable noise vector n satisfying Hn = z, where z = Hr is the syndrome. A decoding algorithm that solves this problem is called a maximum-likelihood decoder.
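A sketch of the resulting decoder (names are ours), assuming at most one flipped bit: every nonzero syndrome z equals the column of H at the error position, so finding the most probable noise vector reduces to a table lookup.

```python
# Parity-check matrix H = [P I3] of the (7,4) Hamming code (mod-2 arithmetic).
H = [
    (1, 1, 1, 0, 1, 0, 0),
    (0, 1, 1, 1, 0, 1, 0),
    (1, 0, 1, 1, 0, 0, 1),
]

def syndrome(r):
    # z = Hr (mod 2)
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

def unit(i):
    # noise vector with a single 1 at position i
    return tuple(int(j == i) for j in range(7))

# Map each single-error syndrome to the position of the flipped bit.
lookup = {syndrome(unit(i)): i for i in range(7)}

def decode(r):
    z = syndrome(r)
    if z == (0, 0, 0):
        return tuple(r)          # r is a codeword; assume no error
    i = lookup[z]                # most probable noise: a single flip at i
    return tuple(b ^ (j == i) for j, b in enumerate(r))

decoded = decode((0, 0, 1, 0, 0, 0, 0))  # one flip in the all-zeros codeword
```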
    pb = (1/K) ∑_{k=1}^{K} P(ŝk ≠ sk).
[Figure: pb versus rate for the repetition codes R1, R5, the Hamming code H(7,4), and the BCH codes BCH(15,7), BCH(511,76), BCH(1023,101), on a linear scale (left) and a logarithmic scale (right). More useful codes combine higher rate with lower pb.]
[Figure: pb versus rate, showing the achievable and not-achievable regions separated by the channel capacity C. The codes R1, R3, R5, and H(7,4) are shown, on a linear scale (left) and a logarithmic scale (right).]