
Optimal Receivers for the AWGN Channel

Transmitter Model:
Consider a generic M-ary communication system where signal s_m(t) is used to convey symbol m ∈ M, where M = {0, 1, . . . , M − 1} is the symbol alphabet. The set of signals, {s_m(t) | m ∈ M}, can be represented with K orthonormal basis signals {φ_k(t) | k = 0, 1, . . . , K − 1}, with
$$s_m(t) = \sum_{k=0}^{K-1} s_{m,k}\,\phi_k(t)$$
where the weights are
$$s_{m,k} = \langle s_m(t), \phi_k(t) \rangle = \int_0^T s_m(t)\,\phi_k(t)\,dt$$
for all m ∈ M and k ∈ {0, 1, . . . , K − 1}.
Note: The Gram-Schmidt procedure not only describes how to find the basis signals and the weights, but also
proves that a set of basis signals exists for any set of finite-energy data signals.
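As a concrete illustration of the expansion and the inner-product formula for the weights, here is a minimal numerical sketch. The basis signals, symbol duration, and sample rate are illustrative assumptions, not part of the notes:

```python
import numpy as np

# Illustrative parameters (assumptions): symbol duration T and sample rate fs.
T, fs = 1.0, 1000
dt = 1.0 / fs
t = np.arange(0.0, T, dt)

# Two unit-energy, mutually orthogonal basis signals on [0, T].
phi0 = np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T)
phi1 = np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * t / T)

# A data signal s_m(t) built from the basis with arbitrarily chosen weights.
s_m = 0.3 * phi0 + 0.8 * phi1

# Recover the weights via s_{m,k} = <s_m(t), phi_k(t)> = integral of s_m*phi_k.
s_m0 = np.sum(s_m * phi0) * dt
s_m1 = np.sum(s_m * phi1) * dt
print(s_m0, s_m1)  # ~0.3 and ~0.8, up to discretization error
```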
Additive White Gaussian Noise (AWGN) Channel Model:
Suppose symbol m ∈ M was transmitted. The received signal is represented by
$$r_c(t) = s_m(t) + w_c(t)$$
where
s_m(t) = transmitted data signal
w_c(t) = additive white Gaussian noise signal
– stationary random process
– Gaussian distribution
– zero mean ⇒ μ_w = E[w_c(t)] = 0
– white noise ⇒ φ_w(τ) = E[w_c(t) w_c(t + τ)] = (N_0/2) δ(τ)
Receiver:
The purpose of the receiver is to determine the transmitted symbol, m, based on observations of r_c(t). Because of the uncertainty introduced by the noise, it is impossible to guarantee that the receiver will correctly determine the transmitted symbol.
Optimal Receiver: An optimal receiver is one that is designed to minimize the probability that a decision error occurs. No other receiver structure can provide a lower probability of error.
The optimal receiver can be separated into two stages: a detector, which filters and samples the received signal, and a decision device, which uses the samples to make its decision.
[Block diagram: r_c(t) → Detector → r → Decision Device → m̂]

Detector – extracts a set of “sufficient statistics”, r, from r_c(t).
Decision Device – attempts to determine the transmitted symbol, m, based on r = [r_0 r_1 · · · r_{K−1}].
Optimal Detector:
– filters and samples the received signal.
– the optimal detector is composed of a bank of K matched filters (or correlators).
– the filters are matched to the basis signals, {φ_k(t) | k = 0, 1, . . . , K − 1}.
[Diagram: a bank of K matched filters φ_k(T − t), each sampled at t = T, or equivalently a bank of K correlators computing ∫₀ᵀ r_c(t) φ_k(t) dt, producing the outputs r_0, r_1, . . . , r_{K−1}]
For k ∈ {0, 1, . . . , K − 1}, the received samples are
$$r_k = \int_0^T r_c(t)\,\phi_k(t)\,dt \qquad [\text{but } r_c(t) = s_m(t) + w_c(t)]$$
$$= \int_0^T s_m(t)\,\phi_k(t)\,dt + \int_0^T w_c(t)\,\phi_k(t)\,dt = s_{m,k} + w_k\,,$$
where
$$w_k = \int_0^T w_c(t)\,\phi_k(t)\,dt$$
represents a sample of noise, and {s_{m,k}} are the weights for signal s_m(t). That is,
$$s_{m,k} = \langle s_m(t), \phi_k(t) \rangle = \int_0^T s_m(t)\,\phi_k(t)\,dt\,.$$
The samples r represent the projection of the received signal onto the signal space defined by {φ_k(t) | k = 0, 1, . . . , K − 1}.
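A minimal simulation sketch of the correlator bank, under the same illustrative basis and parameters as before (all assumptions, not from the notes): white noise of two-sided PSD N_0/2 is approximated on a grid of spacing 1/fs by i.i.d. Gaussian samples of variance (N_0/2)·fs, and each correlator integrates r_c(t)φ_k(t) numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions): duration, sample rate, noise level.
T, fs, N0 = 1.0, 1000, 0.5
dt = 1.0 / fs
t = np.arange(0.0, T, dt)

# K = 2 orthonormal basis signals (rows of phi).
phi = np.array([np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T),
                np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * t / T)])

# Transmitted signal s_m(t) with weights s_{m,k} = (0.3, 0.8).
s_mk = np.array([0.3, 0.8])
s_m = s_mk @ phi

# AWGN: i.i.d. samples of variance (N0/2)*fs approximate PSD N0/2 on this grid.
w_c = rng.normal(0.0, np.sqrt(N0 / 2 * fs), t.size)
r_c = s_m + w_c

# Bank of correlators: r_k = integral of r_c(t)*phi_k(t) dt = s_{m,k} + w_k.
r = phi @ r_c * dt
print(r)  # near [0.3, 0.8]; each entry is perturbed by N(0, N0/2) noise
```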
Properties of w_k:
– Since w_c(t) is a Gaussian random process, w_k is a Gaussian random variable.
– Mean:
$$E[w_k] = E\left[\int_0^T w_c(t)\,\phi_k(t)\,dt\right] = \int_0^T E[w_c(t)]\,\phi_k(t)\,dt = 0\,.$$
– Covariance:
$$E[w_k w_l] = E\left[\int_0^T w_c(t_1)\,\phi_k(t_1)\,dt_1 \int_0^T w_c(t_2)\,\phi_l(t_2)\,dt_2\right]$$
$$= \int_0^T\!\!\int_0^T E[w_c(t_1)\,w_c(t_2)]\,\phi_k(t_1)\,\phi_l(t_2)\,dt_1\,dt_2$$
$$= \int_0^T\!\!\int_0^T \frac{N_0}{2}\,\delta(t_2 - t_1)\,\phi_k(t_1)\,\phi_l(t_2)\,dt_1\,dt_2$$
$$= \frac{N_0}{2} \int_0^T \phi_k(t_2)\,\phi_l(t_2)\,dt_2$$
$$= \begin{cases} N_0/2, & \text{if } k = l \\ 0, & \text{if } k \neq l \end{cases} \;=\; \frac{N_0}{2}\,\delta_{l-k}\,.$$

Properties of r_k:
– Mean:
$$E[r_k] = E[s_{m,k} + w_k] = s_{m,k} + E[w_k] = s_{m,k}\,.$$
– Covariance:
$$E\big[(r_k - E[r_k])(r_l - E[r_l])\big] = E[w_k w_l] = \frac{N_0}{2}\,\delta_{l-k} = \begin{cases} N_0/2, & \text{if } k = l \\ 0, & \text{if } k \neq l \end{cases}$$
– Distribution: {r_k} are a set of independent Gaussian random variables, with r_k ∼ N(s_{m,k}, N_0/2).
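These statistics can be checked by Monte Carlo simulation. A sketch under the same discrete-time approximation as the previous example (all parameters are assumptions); it estimates the mean and covariance of the projected noise samples:

```python
import numpy as np

rng = np.random.default_rng(2)

# Same discrete-time approximation as before (parameters are assumptions).
T, fs, N0, trials = 1.0, 500, 0.5, 5000
dt = 1.0 / fs
t = np.arange(0.0, T, dt)
phi = np.array([np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T),
                np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * t / T)])

# Many noise realizations, each projected onto the basis: one w_k row per trial.
w_c = rng.normal(0.0, np.sqrt(N0 / 2 * fs), (trials, t.size))
w = w_c @ phi.T * dt                  # shape (trials, K)

print(w.mean(axis=0))                 # ~[0, 0]: zero mean
print(w.T @ w / trials)               # ~(N0/2) I = 0.25 I: uncorrelated, var N0/2
```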

Residue:
In general, we cannot perfectly reconstruct rc (t) from the samples r, so by sampling the filter outputs, some
information has been lost. This lost information is the residual error,
$$r_e(t) = r_c(t) - \sum_{k=0}^{K-1} r_k\,\phi_k(t)\,.$$
However, r_e(t) contains no relevant information to help in determining m.
Proof:
$$r_e(t) = s_m(t) + w_c(t) - \sum_{k=0}^{K-1} [s_{m,k} + w_k]\,\phi_k(t)$$
$$= s_m(t) - \sum_{k=0}^{K-1} s_{m,k}\,\phi_k(t) + w_c(t) - \sum_{k=0}^{K-1} w_k\,\phi_k(t)$$
$$= s_m(t) - s_m(t) + w_c(t) - \sum_{k=0}^{K-1} w_k\,\phi_k(t)$$
$$= w_c(t) - \sum_{k=0}^{K-1} w_k\,\phi_k(t)$$
$$= w_e(t)\,,$$
where
$$w_e(t) = w_c(t) - \sum_{k=0}^{K-1} w_k\,\phi_k(t)\,.$$
Since w_c(t) and w_k are not based on m, w_e(t) has the same value regardless of the transmitted signal. Therefore it will be of no direct assistance in determining m. However, w_e(t) may provide some information about w_k, which could be used indirectly to determine m. But
$$E[w_k w_e(t)] = E\left[w_k\left(w_c(t) - \sum_{l=0}^{K-1} w_l\,\phi_l(t)\right)\right]$$
$$= E[w_k w_c(t)] - E\left[\sum_{l=0}^{K-1} w_k w_l\,\phi_l(t)\right]$$
$$= E\left[\int_0^T w_c(\tau)\,\phi_k(\tau)\,d\tau\; w_c(t)\right] - \sum_{l=0}^{K-1} E[w_k w_l]\,\phi_l(t)$$
$$= \int_0^T E[w_c(\tau)\,w_c(t)]\,\phi_k(\tau)\,d\tau - \sum_{l=0}^{K-1} \frac{N_0}{2}\,\delta_{l-k}\,\phi_l(t)$$
$$= \int_0^T \frac{N_0}{2}\,\delta(t - \tau)\,\phi_k(\tau)\,d\tau - \frac{N_0}{2}\,\phi_k(t)$$
$$= \frac{N_0}{2}\,\phi_k(t) - \frac{N_0}{2}\,\phi_k(t) = 0\,.$$

– Therefore, w_e(t) and w_k are uncorrelated for all t ∈ [0, T] and k ∈ {0, 1, . . . , K − 1}.
– Therefore, w_e(t) is independent of w_k for all t ∈ [0, T] and k ∈ {0, 1, . . . , K − 1} (both are Gaussian, and for jointly Gaussian quantities uncorrelated implies independent).
– Therefore, w_e(t) contains no information about w_k.
– Therefore, knowledge of w_e(t) is of no assistance in determining m.
– The samples r are a set of sufficient statistics for determining m. There is no additional information in r_c(t) that is relevant.
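The orthogonality of the residual noise to the projected samples can also be checked numerically. A sketch under the same discrete-time approximation as the earlier examples (parameters are assumptions); it estimates E[w_0 w_e(t)] across many realizations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Same discrete-time approximation as before (parameters are assumptions).
T, fs, N0, trials = 1.0, 500, 0.5, 5000
dt = 1.0 / fs
t = np.arange(0.0, T, dt)
phi = np.array([np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T),
                np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * t / T)])

w_c = rng.normal(0.0, np.sqrt(N0 / 2 * fs), (trials, t.size))
w = w_c @ phi.T * dt                  # projected noise samples w_k
w_e = w_c - w @ phi                   # residual noise w_e(t), one row per trial

# Estimate E[w_0 w_e(t)] at every t: it should vanish everywhere.
corr = (w[:, [0]] * w_e).mean(axis=0)
print(np.abs(corr).max())             # small next to E[w_c(t)^2] = (N0/2)*fs = 125
```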
Optimal Decision Device
The decision device must make a decision about which symbol was transmitted based on the received observations, r. An optimal decision device is one that makes this decision in such a manner that the probability of a symbol error is minimized. Let m̂ be the decision made by the device.
Defn: The a priori probability distribution is the probability distribution of the transmitted symbols before any data has been received. It is denoted by Pr{m sent}. Typically, each symbol is equally likely to have been transmitted, so Pr{m sent} = 1/M.
Defn: The a posteriori probability distribution (APP) is the probability distribution of the transmitted symbols after the received signal has been observed. It is denoted by Pr{m sent | r received}.
Defn: The conditional probability density function f_r(r | m sent) is the pdf of observing r at the output of the detector, given that symbol m was transmitted. This is referred to as the likelihood function.

Maximum A Posteriori Probability (MAP) Decision Rule

To minimize the probability of an error, the decision device must maximize the probability that its decision is correct. It chooses m̂ = m for the value of m with the largest a posteriori probability. That is, choose m̂ = m if
$$\Pr\{m \text{ sent} \mid \mathbf{r} \text{ received}\} \geq \Pr\{l \text{ sent} \mid \mathbf{r} \text{ received}\} \quad \text{for all } l \neq m\,,$$
or equivalently
$$\hat{m} = \arg\max_{m} \Pr\{m \text{ sent} \mid \mathbf{r} \text{ received}\}\,.$$
This is known as the maximum a posteriori probability (MAP) decision rule.
Example: Consider a system where one of M = 4 possible values could have been transmitted. Suppose, based on the received signal, the receiver calculates the following APPs:
Pr{0 sent | r received} = 0.2
Pr{1 sent | r received} = 0.1
Pr{2 sent | r received} = 0.4
Pr{3 sent | r received} = 0.3
According to the MAP decision rule, the receiver would choose m̂ = 2, since it is the symbol most likely to have been transmitted given the observations of the received signal.
Note: The probability of error in this case is 0.6, but any other choice for m̂ would lead to a higher probability of error.
The APPs can be calculated from the likelihood function, f_r(r | m sent), with
$$\Pr\{m \text{ sent} \mid \mathbf{r} \text{ received}\} = \frac{f_{\mathbf{r}}(\mathbf{r} \mid m \text{ sent})\,\Pr\{m \text{ sent}\}}{f_{\mathbf{r}}(\mathbf{r})} = \frac{f_{\mathbf{r}}(\mathbf{r} \mid m \text{ sent})\,\Pr\{m \text{ sent}\}}{\sum_{m'=0}^{M-1} f_{\mathbf{r}}(\mathbf{r} \mid m' \text{ sent})\,\Pr\{m' \text{ sent}\}}\,.$$
For the AWGN channel, since the components of r = [r_0 r_1 · · · r_{K−1}] are independent, and each r_k has a Gaussian distribution with a mean of s_{m,k} and a variance of N_0/2, the likelihood function is
$$f_{\mathbf{r}}(\mathbf{r} \mid m \text{ sent}) = \prod_{k=0}^{K-1} f_{r_k}(r_k \mid m \text{ sent}) = \prod_{k=0}^{K-1} \frac{1}{\sqrt{2\pi(N_0/2)}} \exp\left\{-\frac{(r_k - s_{m,k})^2}{2(N_0/2)}\right\} = \frac{1}{(\sqrt{\pi N_0})^K} \exp\left\{-\frac{1}{N_0} \sum_{k=0}^{K-1} (r_k - s_{m,k})^2\right\}\,.$$
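A sketch of the APP computation for this Gaussian likelihood. The noise level, priors, constellation, and observation are illustrative assumptions; log-likelihoods are used to avoid numerical underflow:

```python
import numpy as np

# Illustrative values (assumptions): noise level, priors, constellation, r.
N0 = 0.5
prior = np.full(4, 0.25)                        # equally likely symbols
s = np.array([[1.0, 0.0], [0.0, 1.0],           # rows are the points s_m
              [-1.0, 0.0], [0.0, -1.0]])
r = np.array([0.3, 0.8])

# log f(r | m) = -(K/2) log(pi*N0) - (1/N0) * ||r - s_m||^2, here with K = 2.
log_lik = -np.log(np.pi * N0) - np.sum((r - s) ** 2, axis=1) / N0

# Bayes' rule: the APP is likelihood times prior, normalized over all symbols.
post = np.exp(log_lik) * prior
post /= post.sum()
print(post)           # the a posteriori probabilities Pr{m sent | r received}
print(post.argmax())  # the MAP decision: the symbol with the largest APP
```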

Maximum Likelihood (ML) Decision Rule

Under certain conditions, the MAP decision rule can be simplified. Usually, all the symbols are equally likely to be transmitted, so the a priori probabilities are Pr{m sent} = 1/M, and the MAP decision rule can be expressed as:
$$\hat{m} = \arg\max_{m} \frac{f_{\mathbf{r}}(\mathbf{r} \mid m \text{ sent})\,(1/M)}{f_{\mathbf{r}}(\mathbf{r})}$$
or, since neither 1/M nor f_r(r) depends on m,
$$\hat{m} = \arg\max_{m} f_{\mathbf{r}}(\mathbf{r} \mid m \text{ sent})\,.$$
This is known as the maximum likelihood (ML) decision rule. Note that the ML decision rule is equivalent to the MAP decision rule if the a priori probabilities are all equal.

Simplifications to the ML Decision Rule

Using the expression given above for the likelihood function, the ML decision rule can then be expressed as:
$$\hat{m} = \arg\max_{m} \frac{1}{(\sqrt{\pi N_0})^K} \exp\left\{-\frac{1}{N_0} \sum_{k=0}^{K-1} (r_k - s_{m,k})^2\right\}$$
or
$$\hat{m} = \arg\max_{m} \exp\left\{-\frac{1}{N_0} \sum_{k=0}^{K-1} (r_k - s_{m,k})^2\right\}$$
or (by taking the log)
$$\hat{m} = \arg\max_{m} \left(-\frac{1}{N_0} \sum_{k=0}^{K-1} (r_k - s_{m,k})^2\right)$$
or
$$\hat{m} = \arg\min_{m} \sum_{k=0}^{K-1} (r_k - s_{m,k})^2\,.$$
But $\sum_{k=0}^{K-1} (r_k - s_{m,k})^2 = \|\mathbf{r} - \mathbf{s}_m\|^2$, the square of the distance between r and the point in the signal space diagram corresponding to s_m(t). Therefore the ML decision rule reduces to:
$$\hat{m} = \arg\min_{m} \|\mathbf{r} - \mathbf{s}_m\|\,.$$
In other words, the optimal decision is the symbol that is “closest” to r in the signal space.

Example: Suppose the M = 4 two-dimensional signal constellation shown below is used for transmission, and that r = [0.3, 0.8]√E_s is observed. The observation is marked in the signal space diagram shown below:
[Signal space diagram: s_0 = (√E_s, 0), s_1 = (0, √E_s), s_2 = (−√E_s, 0), and s_3 = (0, −√E_s) on the (φ_0(t), φ_1(t)) axes, with the observation r marked near s_1]
$$\|\mathbf{r} - \mathbf{s}_0\| = \sqrt{1.13\,E_s} \qquad \|\mathbf{r} - \mathbf{s}_1\| = \sqrt{0.13\,E_s} \qquad \|\mathbf{r} - \mathbf{s}_2\| = \sqrt{2.33\,E_s} \qquad \|\mathbf{r} - \mathbf{s}_3\| = \sqrt{3.33\,E_s}$$
The decoder would choose m̂ = 1.
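The example reduces to a nearest-neighbour search. A sketch that reproduces the quoted distances, reading the constellation points off the diagram and taking E_s = 1 for convenience:

```python
import numpy as np

# Constellation from the example, in units where Es = 1 (an assumption
# consistent with the quoted distances).
s = np.array([[1.0, 0.0],     # s_0
              [0.0, 1.0],     # s_1
              [-1.0, 0.0],    # s_2
              [0.0, -1.0]])   # s_3
r = np.array([0.3, 0.8])

# Squared distances ||r - s_m||^2; the ML decision is the nearest point.
d2 = np.sum((r - s) ** 2, axis=1)
print(d2)           # [1.13, 0.13, 2.33, 3.33], matching the example
print(d2.argmin())  # 1, i.e. the decoder chooses m_hat = 1
```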

Decision Regions:
Each possible received observation will be closest to one of the points in the signal constellation. For each signalling scheme, it is useful to draw the boundaries of the decision regions on the signal space diagram.
[Signal space diagram: the same four-point constellation with decision regions Z_0, Z_1, Z_2, and Z_3 marked; region Z_m is the set of observations closer to s_m than to any other signal point]

Further Simplifications to the ML Decision Rule

The ML decision rule can also be expressed as:
$$\hat{m} = \arg\min_{m} \sum_{k=0}^{K-1} \left(r_k^2 - 2 r_k s_{m,k} + s_{m,k}^2\right)$$
or
$$\hat{m} = \arg\min_{m} \left(\sum_{k=0}^{K-1} r_k^2 - 2 \sum_{k=0}^{K-1} r_k s_{m,k} + \sum_{k=0}^{K-1} s_{m,k}^2\right)$$
or, dropping the term $\sum_k r_k^2$ (which is common to all hypotheses) and writing $\sum_k s_{m,k}^2 = E_m$, the energy of s_m(t),
$$\hat{m} = \arg\min_{m} \left(-2 \sum_{k=0}^{K-1} r_k s_{m,k} + E_m\right)$$
or
$$\hat{m} = \arg\max_{m} \left(\sum_{k=0}^{K-1} r_k s_{m,k} - E_m/2\right)\,.$$
If all signals have equal energy (i.e., E_m = E_l ∀ m, l), then the ML decision rule can be expressed as:
$$\hat{m} = \arg\max_{m} \sum_{k=0}^{K-1} r_k s_{m,k}\,.$$
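A sketch of the simplified correlation metric, using the same illustrative constellation as the earlier example; since all four points have equal energy, dropping the E_m/2 term does not change the decision:

```python
import numpy as np

# Same illustrative constellation and observation as the earlier example.
s = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
r = np.array([0.3, 0.8])

E = np.sum(s ** 2, axis=1)     # signal energies E_m (all equal to 1 here)
metric = s @ r - E / 2         # sum_k r_k s_{m,k} - E_m / 2
print(metric.argmax())         # 1: agrees with the minimum-distance decision

# With equal energies the E_m/2 term is constant, so plain correlation suffices.
print((s @ r).argmax())        # 1
```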
