
Module 4: Signal Representation and Baseband Processing

Lesson 19: Maximum Likelihood Detection and Correlation Receiver




After reading this lesson, you will learn about:
Principle of Maximum Likelihood (ML) detection;
Likelihood function;
Correlation receiver;
Vector receiver.

Maximum likelihood (ML) detection:
We start with the following assumptions:

The number of information-bearing signals (symbols), designed after the G-S-O approach, is M, and one of these M signals is received from the AWGN channel in each time slot of T sec. Let the messages be denoted by m_i, i = 1, 2, ..., M. Each message, as discussed in an earlier lesson, may be represented by a group of bits, e.g. by a group of m bits each such that 2^m = M.

All symbols are equiprobable. This is not a grave assumption since, if the input message probabilities are different and known, they can be incorporated following a Bayesian approach. However, for a bandwidth-efficient transmission scheme, as is often needed in wireless systems, the source coding operation should be emphasized to ensure that all the symbols are independent and equally likely. Alternatively, the number of symbols, M, may also be chosen appropriately to approach this desirable condition.

The channel noise is an AWGN process with zero mean and double-sided psd N_0/2. Let w(t) denote a noise sample function over 0 ≤ t < T.

Let R(t) denote the received random process with a sample function over a symbol duration denoted as r(t), 0 ≤ t ≤ T. Now, a received sample function can be expressed in terms of the corresponding transmitted information-bearing symbol, say s_i(t), and a sample function w(t) of the Gaussian noise process simply as:

r(t) = s_i(t) + w(t),  0 ≤ t < T     4.19.1
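To make the model concrete, here is a minimal simulation sketch (Python/NumPy; the values of T, N_0 and the choice of s_i(t) are our own assumptions, not part of the lesson). Note that discrete-time simulation of white noise with double-sided psd N_0/2 requires a per-sample variance of N_0/(2·dt):

    import numpy as np

    T, dt = 1.0, 1e-3                 # assumed symbol duration and sample spacing
    N0 = 0.2                          # assumed single-sided noise psd (double-sided N0/2)
    t = np.arange(0.0, T, dt)

    s_i = np.ones_like(t)             # assumed transmitted waveform s_i(t): a flat +1 pulse
    w = np.random.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.shape)  # AWGN samples
    r = s_i + w                       # received sample function, as in Eq. 4.19.1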

At the receiver, we do not know which s_i(t) has been transmitted over the interval 0 ≤ t < T. So, the job of an efficient receiver is to make the best estimate of the transmitted signal s_i(t) upon receiving r(t) and to repeat the same process during all successive symbol intervals. This problem can be explained nicely using the concept of signal space, introduced earlier in Lesson #16. Depending on the modulation and transmission strategy, the receiver usually has knowledge of the signal constellation that is in use. This also means that the receiver knows all the nominal basis functions used by the transmitter. For convenience, we will mostly consider a transmission strategy involving two basis functions, φ_1(t) and φ_2(t) (described now as unit vectors), for explanation, though most of the discussion will hold for any number of basis functions. Fig. 4.19.1 shows a two-dimensional signal space with a signal vector s_i and a received vector r. Note the noise vector w as well.

Fig. 4.19.1 Signal space showing a signal vector s_i, a received vector r and the noise vector w

The job of the receiver can now be formally restated as: given the received signal vector r, find an estimate m̂ of the transmitted symbol m_i once in each symbol duration, in a way that would minimize the probability of erroneous decision of a symbol on average (continuous transmission of symbols is implicit).

The principle of Maximum Likelihood (ML) detection provides a general solution to this problem and leads naturally to the structure of an optimum receiver. When the receiver takes a decision that m̂ = m_i, the associated probability of symbol decision error may be expressed as:

P_e(m_i, r) = probability of error in deciding, on receiving r, that m_i was transmitted
            = Pr(m_i not sent | r) = 1 − Pr(m_i sent | r).

In the above, Pr(m_i not sent | r) denotes the probability that m_i was not transmitted while r is received. So, an optimum decision rule may heuristically be framed as:

Set m̂ = m_i if Pr(m_i sent | r) ≥ Pr(m_k sent | r) for all k ≠ i     4.19.2

This decision rule is known as the maximum a posteriori probability (MAP) rule. This rule requires the receiver to determine the probability of transmission of a message from the received vector. Now, for practical convenience, we invoke Bayes' rule to obtain an equivalent statement of the optimum decision rule in terms of the a priori probability:


p(m_i | r) = [ p(m_i) · p_r(r | m_i) ] / p_r(r) ,  i = 1, 2, ..., M     4.19.3

where
p(m_i | r) : a posteriori probability of 'm_i' given r
p_r(r) : joint pdf of r, defined over the entire set of signals {s_i(t)}; independent of any specific message m_i
p_r(r | m_i) : probability that a specific r will be received if the message m_i is transmitted; known as the a priori probability of r given m_i
p(m_i) : a priori probability of message m_i, = 1/M


From Eq. 4.19.3, since p(m_i) = 1/M and p_r(r) do not depend on i, we see that determination of the maximum a posteriori probability is equivalent to determination of the maximum a priori probability p_r(r | m_i). This a priori probability is also known as the likelihood function.

So the decision rule can equivalently be stated as:
Set m̂ = m_i if p_r(r | m_k) is maximum for k = i

Usually ln[p_r(r | m_k)], i.e. the natural logarithm of the likelihood function, is considered. As the likelihood function is non-negative, another equivalent form of the decision rule is:
Set m̂ = m_i if ln[p_r(r | m_k)] is maximum for k = i     4.19.4



A Maximum Likelihood Detector realizes the above decision rule. Towards this, the signal space is divided into M decision regions, Z_i, i = 1, 2, ..., M, such that

the vector r lies inside Z_i if ln[p_r(r | m_k)] is maximum for k = i     4.19.5

Fig. 4.19.2 indicates two decision zones in a two-dimensional signal space. The received vector r lies inside region Z_i if ln[p_r(r | m_k)] is maximum for k = i.

Fig. 4.19.2 Decision zones Z_1 and Z_2 for signal vectors s_1 and s_2 in a two-dimensional signal space
Now, for an AWGN channel, the following statement is equivalent to the ML decision:

The received vector r lies inside decision region Z_i if Σ_{j=1}^{N} (r_j − s_kj)² is minimum for k = i     4.19.6
That is, the decision rule simply is to choose the signal point s_i if the received vector r is closest to s_i in terms of Euclidean distance. So, it appears that the Euclidean distances of a received vector r from all the signal points are to be determined for optimum decision-making. This can, however, be simplified. Note that, on expansion, we get

Σ_{j=1}^{N} (r_j − s_kj)² = Σ_{j=1}^{N} r_j² − 2·Σ_{j=1}^{N} r_j·s_kj + Σ_{j=1}^{N} s_kj²     4.19.7

It is interesting that the first term on the R.H.S., i.e. Σ_{j=1}^{N} r_j², is independent of k and hence need not be computed for our purpose. The second term, 2·Σ_{j=1}^{N} r_j·s_kj, is the inner product of two vectors. The third term, Σ_{j=1}^{N} s_kj², is the energy of the k-th symbol. If the modulation format is so chosen that all symbols carry the same energy, this term also need not be computed. We will see in Module #5 that many popular digital modulation schemes such as BPSK and QPSK exhibit this property in a linear time-invariant channel.

So, a convenient observation is: the received vector r lies in decision region Z_i if

Σ_{j=1}^{N} r_j·s_kj − (1/2)·E_k is maximum for k = i

where E_k = Σ_{j=1}^{N} s_kj² is the energy of the k-th symbol.

That is, a convenient form of the ML decision rule is:

Choose m̂ = m_i if Σ_{j=1}^{N} r_j·s_kj − (1/2)·E_k is maximum for k = i     4.19.8
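A minimal sketch of this decision metric follows (Python/NumPy; the constellation S, the received vector r and the function name ml_decision are illustrative assumptions, not from the lesson):

    import numpy as np

    def ml_decision(r, S, E):
        # Metric of Eq. 4.19.8: inner product <r, s_k> minus half the symbol energy E_k.
        metrics = S @ r - 0.5 * E
        return int(np.argmax(metrics))

    # Hypothetical 4-point constellation in a two-dimensional signal space:
    S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    E = np.sum(S ** 2, axis=1)        # symbol energies E_k
    r = np.array([0.9, -0.2])         # received vector from the correlation detector
    print(ml_decision(r, S, E))       # -> 0: r is closest to s_1 = (1, 0)

Because the argmax of ⟨r, s_k⟩ − E_k/2 coincides with the argmin of Σ_j (r_j − s_kj)², this is exactly the minimum-Euclidean-distance choice of Eq. 4.19.6.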



A Correlation Receiver, consisting of a Correlation Detector and a Vector Receiver, implements the ML decision rule [4.19.8] by (a) first finding r with a correlation detector and then (b) computing the metric in [4.19.8] and taking a decision in a vector receiver. Fig. 4.19.3 shows the structure of a Correlation Detector for determining the received vector r from the received signal r(t). Fig. 4.19.4 highlights the operation of a Vector Receiver.
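As a rough numerical illustration of the correlation detector (again Python/NumPy, with assumed sampling parameters and a hypothetical rectangular basis function; the actual detector integrates in continuous time), the projections can be approximated by Riemann sums:

    import numpy as np

    def correlation_detector(r_t, basis, dt):
        # Approximate r_j = integral over [0, T) of r(t)*phi_j(t) dt, one r_j per basis function.
        return np.array([np.sum(r_t * phi_j) * dt for phi_j in basis])

    T, dt = 1.0, 1e-3
    t = np.arange(0.0, T, dt)
    phi_1 = np.ones_like(t) / np.sqrt(T)            # hypothetical unit-energy basis function
    r_t = 0.8 * phi_1                               # noiseless received waveform, for illustration
    print(correlation_detector(r_t, [phi_1], dt))   # -> approximately [0.8]

The resulting vector r can then be passed, together with the signal constellation, to a vector-receiver routine such as the ml_decision sketch above.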




Fig. 4.19.3 The structure of a Correlation Detector for determining the received vector r from the received signal r(t): a bank of N correlators in which the j-th branch multiplies r(t), 0 ≤ t ≤ T, by φ_j(t), integrates over [0, T] and samples the output at t = T to produce r_j, giving r = [r_1, r_2, ..., r_N]^T

































Fig. 4.19.4 Block schematic diagram for the Vector Receiver: inner products (r, s_i) are computed for each signal vector s_i, (1/2)·E_i is subtracted from each, and the largest of the M results selects the estimate m̂

Features of the received vector r

We will now briefly discuss the statistical features of the received vector r as obtained at the output of the correlation detector [Fig. 4.19.3]. The j-th element of r, which is obtained at the output of the j-th correlator once in T seconds, can be expressed as:
r_j = ∫_0^T r(t)·φ_j(t) dt = ∫_0^T [s_i(t) + w(t)]·φ_j(t) dt
    = s_ij + w_j ;  j = 1, 2, ..., N     4.19.9
Here w_j is a Gaussian distributed random variable with zero mean, and s_ij is a scalar signal component of s_i. Now, the mean of the correlator output is E[r_j] = E[s_ij + w_j] = s_ij = m_rj, say. We note that the mean of the correlator output is independent of the noise process. However, the variances of the correlator outputs are dependent on the strength of the accompanying noise:

σ²_rj = Var[r_j] = E[(r_j − s_ij)²] = E[w_j²]

      = E[ ∫_0^T w(t)·φ_j(t) dt · ∫_0^T w(u)·φ_j(u) du ]

      = E[ ∫_0^T ∫_0^T φ_j(t)·φ_j(u)·w(t)·w(u) dt du ]

Taking the expectation operation inside, we can write

σ²_rj = ∫_0^T ∫_0^T φ_j(t)·φ_j(u)·E[w(t)·w(u)] dt du = ∫_0^T ∫_0^T φ_j(t)·φ_j(u)·R_w(t, u) dt du     4.19.10

Here, R_w(t, u) is the autocorrelation of the noise process. As we have learnt earlier, an additive white Gaussian noise process is a WSS random process, and hence the autocorrelation function may be expressed as R_w(t, u) = R_w(t − u) and, further, R_w(t − u) = (N_0/2)·δ(t − u), where N_0 is the single-sided noise power spectral density in Watt/Hz. So, the variance of the correlator output now reduces to:

σ²_rj = (N_0/2)·∫_0^T ∫_0^T φ_j(t)·φ_j(u)·δ(t − u) dt du = (N_0/2)·∫_0^T φ_j²(t) dt = N_0/2     4.19.11

It is interesting to note that the variances of the random signals at the outputs of all N correlators are (a) the same, (b) independent of the information-bearing signal waveform and (c) dependent only on the noise psd.
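These properties are easy to check by simulation. A sketch (Python/NumPy with assumed values; the basis function is a hypothetical unit-energy rectangular pulse) that correlates pure noise against φ_1(t) and compares the sample variance with N_0/2:

    import numpy as np

    rng = np.random.default_rng(0)
    T, dt, N0 = 1.0, 1e-3, 0.2
    t = np.arange(0.0, T, dt)
    phi_1 = np.ones_like(t) / np.sqrt(T)    # hypothetical unit-energy basis function

    trials = 5000
    w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))
    w_1 = (w * phi_1).sum(axis=1) * dt      # Riemann-sum version of w_j in Eq. 4.19.9
    print(w_1.mean(), w_1.var())            # approximately 0.0 and N0/2 = 0.1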

Now, the likelihood function for s_i(t), as introduced earlier in Eq. 4.19.3 and the ML decision rule [4.19.5], can be expressed in terms of the output of the correlation detector. The likelihood function for m_i is p_r(r | m_i) = f_r(r | m_i) = f_r(r | s_i(t)), where f_r(r | m_i) is the conditional pdf of r given m_i.

In our case,

f_r(r | m_i) = ∏_{j=1}^{N} f_rj(r_j | m_i) ,  i = 1, 2, ..., M     4.19.12
where f_rj(r_j | m_i) is the pdf of a Gaussian random variable with mean s_ij and variance σ²_rj = N_0/2, i.e.

f_rj(r_j | m_i) = (1/√(2π·σ²_rj)) · exp[ −(r_j − s_ij)² / (2σ²_rj) ]     4.19.13
Combining Eq. 4.19.12 and 4.19.13, we finally obtain

f_r(r | m_i) = (π·N_0)^(−N/2) · exp[ −(1/N_0)·Σ_{j=1}^{N} (r_j − s_ij)² ] ,  i = 1, 2, ..., M     4.19.14

This generic expression is of fundamental importance in analyzing the error performance of digital modulation schemes [Module #5].
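As a consistency check (our own note, though it follows directly from Eq. 4.19.14): ln f_r(r | m_i) = −(N/2)·ln(π·N_0) − (1/N_0)·Σ_{j=1}^{N} (r_j − s_ij)², so maximizing the likelihood over i is the same as minimizing the Euclidean distance, in agreement with Eq. 4.19.6. A small sketch with a hypothetical constellation:

    import numpy as np

    def log_likelihood(r, s_i, N0):
        # ln f_r(r|m_i) from Eq. 4.19.14.
        N = len(r)
        return -0.5 * N * np.log(np.pi * N0) - np.sum((r - s_i) ** 2) / N0

    S = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])  # hypothetical
    r, N0 = np.array([0.7, -1.2]), 0.5
    ll = [log_likelihood(r, s_i, N0) for s_i in S]
    dist = [np.sum((r - s_i) ** 2) for s_i in S]
    print(int(np.argmax(ll)) == int(np.argmin(dist)))   # -> True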


Problems

Q4.19.1) Consider a binary transmission scheme where a bit '1' is represented by +1.0 and a bit '0' is represented by −1.0. Determine the basis function if no carrier modulation scheme is used. If the additive noise is a zero-mean Gaussian process, determine the mean values of r_1 and r_2 at the output of the correlation detector. Further, determine E_1 and E_2 as per Fig. 4.19.4.
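One way to sanity-check your answers by simulation (a sketch under assumed values of T, dt and N_0; the rectangular φ(t) = 1/√T over 0 ≤ t < T is a candidate basis function to be verified, not a stated solution):

    import numpy as np

    rng = np.random.default_rng(1)
    T, dt, N0 = 1.0, 1e-3, 0.1
    t = np.arange(0.0, T, dt)
    phi = np.ones_like(t) / np.sqrt(T)       # candidate basis function

    for amp in (+1.0, -1.0):                 # waveforms for bit '1' and bit '0'
        w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(2000, t.size))
        r = ((amp + w) * phi).sum(axis=1) * dt
        print(amp, r.mean())                 # sample means near +sqrt(T) and -sqrt(T)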
