DC Digital Communication PART6

The document discusses matched filters and maximum likelihood receivers. It begins by explaining what a matched filter is and how it works to maximize the signal-to-noise ratio. It then derives the matched filter mathematically, showing that the impulse response of the matched filter is the time-reversed and delayed version of the input signal. Finally, it discusses how a maximum likelihood receiver decomposes the received signal into its constituent parts in signal space in order to make the optimal decision about which signal was transmitted.

Uploaded by

ARAVIND
Copyright
© Attribution Non-Commercial (BY-NC)

Matched Filters

By: Andy Wang


What is a matched filter? (1/1)
 A matched filter is a filter used in communications to
“match” a particular transmit waveform.
 It passes all of the signal's frequency components while
suppressing bands that contain only noise, so that the
maximum amount of signal power gets through.
 The purpose of the matched filter is to maximize the
signal-to-noise ratio at the sampling point of a bit stream
and to minimize the probability of undetected errors in
the received signal.
 To achieve the maximum SNR, we want to pass all the
signal frequency components, but to weight more heavily
those components that are large and therefore contribute
more to the overall SNR.
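The SNR-maximizing property can be checked numerically. The following is a minimal discrete-time sketch (my own example, not from the slides): the peak SNR at the sampling instant is (Σ g·h_rev)² / (σ² Σ h²), which the Schwarz inequality says is largest when h is the time-reversed pulse.

```python
import numpy as np

# Assumed example pulse and noise level (not from the slides).
g = np.array([0.2, 0.5, 1.0, 0.5, 0.2])   # transmit pulse g[n]
sigma2 = 0.1                               # noise power per sample (N0/2)

def peak_snr(h):
    # output at the sampling point T is the correlation of g with h reversed
    peak = np.sum(g * h[::-1])
    return peak**2 / (sigma2 * np.sum(h**2))

snr_matched = peak_snr(g[::-1])            # h[n] = g[N-1-n], the matched filter
snr_boxcar = peak_snr(np.ones(5))          # a mismatched (flat) filter
print(snr_matched, snr_boxcar)
```

The matched filter attains the bound E_g/σ² (here Σg²/σ²); any other filter of the same length does worse.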
Deriving the matched filter (1/8)
 A basic problem that often arises in the study of communication systems is that
of detecting a pulse transmitted over a channel that is corrupted by channel noise
(i.e. AWGN).
 Let us consider a receiver model involving a linear time-invariant (LTI) filter of
impulse response h(t).
 The filter input x(t) consists of a pulse signal g(t) corrupted by additive channel
noise w(t) of zero mean and power spectral density N0/2.
 The resulting output y(t) is composed of go(t) and n(t), the signal and noise
components of the input x(t), respectively.

x(t) = g(t) + w(t),   0 ≤ t ≤ T
y(t) = go(t) + n(t)

[Block diagram: signal g(t) and white noise w(t) are summed to form x(t), which passes through the LTI filter of impulse response h(t) (the linear receiver); the output y(t) is sampled at time t = T to give y(T).]
Deriving the matched filter (2/8)
 Goal of the linear receiver
 To optimize the design of the filter so as to minimize
the effects of noise at the filter output and improve the
detection of the pulse signal.
 Signal-to-noise ratio is:

SNR = |go(T)|² / σn² = |go(T)|² / E{n²(t)}

where |go(T)|² is the instantaneous power of the filtered signal go(t) at
time t = T, and σn² is the variance of the filtered zero-mean white
Gaussian noise.
Deriving the matched filter (3/8)
 We sample at t = T because that is where the filtered
signal power is maximum.
 Examine go(t) via the Fourier transform:

Go(f) = G(f) H(f)

so   go(t) = ∫ G(f) H(f) e^(j2πft) df

then   |go(t)|² = | ∫ G(f) H(f) e^(j2πft) df |²
Deriving the matched filter (4/8)
 Examine σn²:

σn² = E{n(t)²} − (E{n(t)})²,  but n(t) is zero mean, so
σn² = E{n(t)²}

and recall that

E{n(t)²} = var{n(t)} = Rn(0),   the autocorrelation at τ = 0

Rn(τ) = ∫ Sn(f) e^(j2πfτ) df   (the autocorrelation is the inverse
Fourier transform of the power spectral density)

Rn(0) = ∫ Sn(f) df
Deriving the matched filter (5/8)
 Recall: passing a signal with PSD SX(f) through a filter H(f)
gives an output PSD SY(f) = SX(f) |H(f)|².
 In this case, SX(f) is the PSD of white Gaussian noise, SX(f) = N0/2.
 Since Sn(f) is our output PSD:

Sn(f) = (N0/2) |H(f)|²

σn² = E{n(t)²} = Rn(0) = ∫ (N0/2) |H(f)|² df = (N0/2) ∫ |H(f)|² df

so

SNR = | ∫ H(f) G(f) e^(j2πfT) df |² / [ (N0/2) ∫ |H(f)|² df ]
Deriving the matched filter (6/8)
 To maximize, use the Schwarz inequality.
 Requirements: φ1 and φ2 must be finite-energy signals, i.e.

∫ |φ1(x)|² dx < ∞   and   ∫ |φ2(x)|² dx < ∞

Then

| ∫ φ1(x) φ2(x) dx |² ≤ ∫ |φ1(x)|² dx · ∫ |φ2(x)|² dx

Equality holds if φ1(x) = k φ2*(x).
Deriving the matched filter (7/8)
 We pick φ1(f) = H(f) and φ2(f) = G(f)e^(j2πfT) and want to make the
numerator of the SNR as large as possible:

| ∫ H(f) G(f) e^(j2πfT) df |² ≤ ∫ |H(f)|² df · ∫ |G(f) e^(j2πfT)|² df

| ∫ H(f) G(f) e^(j2πfT) df |² ≤ ∫ |H(f)|² df · ∫ |G(f)|² df

so, according to the Schwarz inequality,

SNR ≤ [ ∫ |H(f)|² df · ∫ |G(f)|² df ] / [ (N0/2) ∫ |H(f)|² df ]
    = (2/N0) ∫ |G(f)|² df = maximum SNR
Deriving the matched filter (8/8)
 Inverse transform
 Assume g(t) is real. This means g(t) = g*(t).
 If F{g(t)} = G(f), then F{g*(t)} = G*(−f),
so G(f) = G*(−f) for a real signal g(t), i.e. G*(f) = G(−f)
by conjugate symmetry.
 Find h(t), the inverse transform of H(f) = k G*(f) e^(−j2πfT):

h(t) = k ∫ G*(f) e^(−j2πfT) e^(j2πft) df
     = k ∫ G*(f) e^(−j2πf(T−t)) df
     = k ∫ G(f) e^(j2πf(T−t)) df   (substituting f → −f and using G*(f) = G(−f))

h(t) = k g(T − t)

h(t) is the time-reversed and delayed version of the input
signal g(t). It is “matched” to the input signal.


What is a correlation detector? (1/1)
 A practical realization of the
optimum receiver is the
correlation detector.
 The detector part of the
receiver consists of a bank of
M product-integrators, or
correlators, supplied with a set
of orthonormal basis functions,
that operates on the received
signal x(t) to produce the
observation vector x.
 The signal transmission
decoder is modeled as a
maximum-likelihood decoder
that operates on the
observation vector x to
produce an estimate, m̂.

[Figure: the detector (bank of correlators) feeds the observation vector x to the signal transmission decoder.]
The equivalence of correlation and
matched filter receivers (1/3)
 We can also use a corresponding set of matched filters
to build the detector.
 To demonstrate the equivalence of a correlator and a
matched filter, consider an LTI filter with impulse response
hj(t).
 With the received signal x(t) used as the filter input, the
resulting filter output, yj(t), is defined by the convolution
integral:

yj(t) = ∫ x(τ) hj(t − τ) dτ
The equivalence of correlation and
matched filter receivers (2/3)
 From the definition of the
matched filter, we set the
impulse response hj(t) equal to
the time-reversed basis
function φj(t), so that:

hj(t) = φj(T − t)

 Then the output becomes:

yj(t) = ∫ x(τ) φj(T − t + τ) dτ

 Sampling at t = T, we get:

yj(T) = ∫ x(τ) φj(τ) dτ
The equivalence of correlation and
matched filter receivers (3/3)
 So we can see that the
detector part of the
receiver may be
implemented using either
matched filters or
correlators. The output of
each correlator is
equivalent to the output of
a corresponding matched
filter when sampled at t = T.

[Figure: a bank of matched filters sampled at t = T produces the same observation vector as a bank of correlators.]
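The equivalence above is easy to confirm numerically. This is a minimal sketch (assumed basis function and signal, not from the slides): sampled at t = T, a matched filter hj(t) = φj(T − t) and a correlator against φj produce the same number.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 64
phi = np.sin(2 * np.pi * np.arange(T) / T)      # one basis function phi_j
phi /= np.linalg.norm(phi)                       # unit-energy (orthonormal) scaling
x = 3.0 * phi + 0.1 * rng.standard_normal(T)     # received signal x(t)

corr_out = np.sum(x * phi)                       # correlator: integral of x * phi
mf_out = np.convolve(x, phi[::-1])[T - 1]        # matched filter sampled at t = T
print(corr_out, mf_out)                          # identical up to round-off
```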
Maximum Likelihood Receiver
• The transmitter sends one of M signals
si(t), for i = 1, 2, …, M
• The M signals form a constellation in the
signal space

[Figure: a constellation of signal points s1, …, s7 in the signal space.]

• The received signal x(t) = si(t) + n(t) is
decomposed into its components in the
signal space.

[Figure: a bank of N correlators projects x(t) onto the basis functions, producing x1 = si1 + n1, x2 = si2 + n2, …, xN = siN + nN; a decision rule maps this vector to the estimate ŝi(t).]

E{xj} = sij + E{nj} = sij
var{xj} = var{nj} = N0/2

f_Xj(x) = (1/√(πN0)) exp( −(x − sij)² / N0 )

Since the xi are independent Gaussian distributed random variables,
their joint density function is given by:

f_{X|sj}(x1, x2, …, xN) = (πN0)^(−N/2) exp( −Σ_{i=1}^{N} (xi − sji)² / N0 )

An ML receiver selects the sj that maximizes f_{X|sj}.

Define this joint density, viewed as a function of sj, to be the
likelihood function.
Maximizing the likelihood function is equivalent to minimizing the
quantity

d²(x, sj) = Σ_{i=1}^{N} (xi − sji)²

ŝj = arg min_{sj} d²(x, sj)

The maximum likelihood receiver picks the signal that is closest to the
received signal in the signal space.

[Figure: the received point x lies at distance dmin from the nearest constellation point; the receiver decides in favor of that point.]
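The minimum-distance rule above can be sketched in a few lines. This is an assumed example (a 4-point constellation of my own choosing, not the 7-point one on the slide): pick the signal point sj closest to the received vector x.

```python
import numpy as np

# Hypothetical 2-D constellation for illustration.
constellation = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

def ml_decide(x):
    d2 = np.sum((constellation - x) ** 2, axis=1)  # squared distances d^2(x, s_j)
    return int(np.argmin(d2))                      # index of the nearest point

x = np.array([0.8, -1.2])        # noisy observation near s_1 = (1, -1)
print(ml_decide(x))              # -> 1
```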
EECE 477/545

Communication System II
(Digital Communications)

Dr. X. Li

Lecture 6: Section 3.1,3.2


(Matched Filter, ML Detector)
September 11, 2008
Review: we have known
• Baseband demodulation
– General procedure, system model, signal model
– Signal space representation of signals & noise
– Matched filter as the optimal demodulation filter
• Optimize output SNR
• To study today:
– Matched filter (MF): signal-space MF design
– Optimal detector
• Note: the book’s approach is limited to the binary case. Our
description includes more general cases, but the final results
are identical in the binary case.
IV. Matched filter: signal space representation

1) Time-domain signal & signal space representation

si(t) = Σ_{j=1}^{N} aij φj(t),   si = (ai1, …, aiN)
r(t) = si(t) + n(t),   r = si + n = (ai1, …, aiN) + (n1, …, nN)

with basis {φj(t)}.

2) Use the basis functions as matched filters

[Figure: r(t) is fed to a bank of filters φ1(T − t), …, φN(T − t); sampling each output at t = T gives r1 = ai1 + n1, …, rN = aiN + nN, i.e. the vector r.]
• Why does the figure give the matched filter output?

Consider the j-th matched filter hj(t) = φj(T − t):

yj(t) = hj(t) * r(t) = ∫ r(τ) hj(t − τ) dτ

rj = yj(T) = ∫ r(τ) hj(T − τ) dτ = ∫ r(τ) φj(τ) dτ
           = ∫ si(τ) φj(τ) dτ + ∫ n(τ) φj(τ) dτ

rj = aij + nj
3) Convenience of using basis as matched filter
• Matched filter & sampler become transparent: when the
input is a signal-space vector, the output just equals that
signal-space vector
• Matched filters are normalized, so the noise components
have variance (power) N0/2 only.
5) Example 6.1 (Same problem as Example 5.1-5.3,
but do it in signal space): Binary PAM transmission
(100 bps, A=1 volt). AWGN with PSD 10-3 W/Hz.
Find the sampling values after demodulation and
the SNR.
3.2.1 Maximum Likelihood Detector
I. General introduction
1) Where is the detector

[Figure: r(t) → MF → z(T) = r → Detector → ŝi (symbols or bits)]

2) What is a detector?
• Detector is a decision-making device that makes decisions of the
transmitted symbols based on the received signal samples (or,
map PAM samples z(T) into symbols or binary digits).
3) How does a detector work?
• Detector works based on the statistical property of the received
samples and the transmitted symbols
• Detector is important because of random noise
• Objective is to minimize decision error probability
II. Maximum likelihood detector
– Maximum likelihood detector is a detector that
minimizes decision error probability by choosing
the symbols that most likely produce the
received samples.
• “Most likely”: quantified by the likelihood function
– Likelihood function for si: the probability of the
received samples when the transmitted symbol
is si
• Example: if tossing a die, what is the likelihood of
obtaining “1”, etc.?
• In our case, if 1 is transmitted, what is the likelihood
that 0.5 or −0.4 is received (due to noise)?
III. Binary PAM transmission case
1) Input to the detector (use signal space
representation)
To simplify notation,

z(T) = ai(T) + n0(T)   →   z = ai + n0

where ai (= ±√Eb), i = 1, 2, is determined by the symbol si, and
n0 is Gaussian noise with zero mean and variance σ0² = N0/2.

Be careful: some slight abuse of notation:

si denotes a symbol, ai denotes the signal-space value of si;
n or n0 both denote the AWGN signal-space value;
sometimes y, sometimes z is used as the MF output.
2) Distribution of z can be written as a pdf conditioned
on the transmitted symbols s1 and s2
• The meaning is: if s1 is transmitted, then the probability of
having sampling value z is p(z| a1)
• Obviously, if s1 is sent, then z more likely lies around a1. If s2
is sent, then z more likely lies around a2.
• If the actually received value z is near a1, then we had better
decide that s1 was sent. Otherwise, we decide s2 was sent.

p(z | ai) = (1/(σ0 √(2π))) exp( −(z − ai)² / (2σ0²) ),   z = ai + n0,  i = 1, 2

This is the likelihood function.
3) Such idea is mathematically defined as
• With the received sample z, the decision is made in favor
of si such that p(z|ai) is the largest
• this gives the maximum likelihood detection rule

ŝ = arg max_{ai} p(z | ai)

• Example 6.2. {s1=0, s2=1} produce samples {a1=0.1, a2=−0.1},
N0/2 = 10⁻³. If the received sample is z = 0.05, what is
the best decision?
4) Maximum likelihood decision rule can be further
simplified to minimum distance decision rule
ln p(z | ai) = −(z − ai)² / (2σ0²) − ln(σ0 √(2π))

arg max_{ai} p(z | ai)  is equivalent to  arg min_{ai} (z − ai)²

• Binary minimum distance decision rule

if (z − a1)² < (z − a2)², choose a1 (or s1)
if (z − a1)² > (z − a2)², choose a2 (or s2)

• Example 6.3. Same as Example 6.2. For z = 0.05, we
have (z−a1)² = (0.05−0.1)² = 0.0025, while (z−a2)² = (0.05+0.1)² = 0.0225.
So choose a1 (or s1).
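The numbers of Examples 6.2 and 6.3 can be redone as a tiny script — a sketch using only the slide's own values a1 = 0.1, a2 = −0.1, z = 0.05:

```python
# Binary minimum-distance decision with the slide's Example 6.2/6.3 values.
a1, a2, z = 0.1, -0.1, 0.05
d1 = (z - a1) ** 2               # distance-squared to a1: 0.0025
d2 = (z - a2) ** 2               # distance-squared to a2: 0.0225
decision = a1 if d1 < d2 else a2 # pick the closer signal point
print(d1, d2, decision)          # chooses a1 (i.e. s1)
```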
5) Explanation of binary minimum-distance decision
rule in signal space
a a
With decision threshold   1 2 ,
2
if z   , choose a1 (or s1 )

if z   , choose a2 (or s2 )

• For binary PAM, since

a1  a2  Eb , we have   0
IV. Extend Binary Decision Rule to M-ary
1) M-ary PAM: there are M different symbols si,
which give M different ai, and hence M likelihood
functions p(z|ai).
• Minimum distance decision rule: find the signal point ai
that is closest to the received value z.
• Exercise 6.4. Determine the decision rule for M=4
PAM.
2) QPSK
• We have a 2-dimensional signal space; each sampling
value z and each signal point ai is a 2-dimensional vector.
• Minimum distance decision rule:

Find ai with minimum ||z − ai||² = (z1 − ai1)² + (z2 − ai2)²

• In the signal-space plane, each symbol has a decision
region. Samples z falling in this region are decided in
favor of this symbol.
V. Now we know maximum likelihood detector
is equivalent to minimum distance detector
– Very simple decision rule in signal space, just
compare the distances from the received
sample to all the signal points
– It is optimal for equally probable symbols only
– Otherwise, it is only sub-optimal
– Some special cases
• Binary PAM, M-ary PAM, and QPSK decision rules
– But how optimal is it? We need a metric for
evaluation
Conclusions
• More on Matched filter
– Signal space representation (We like it more!)
• Maximum likelihood detector
– What is it?
– Can be simplified to “minimum distance detector”,
especially useful in signal space
– Decision rule: binary PAM case
– Decision rule: M-ary PAM or QPSK cases
http://www.mrs.umn.edu/~sungurea/introstat/history/w98/Bayes.html

Bayes' theorem and maximum likelihood

Xiaoan Wu
Feb 16, 2005

[Portrait: Thomas Bayes (1702–1761)]
Outline
• An example: count # of fish in a pond
• Bayes’ Theorem and maximum likelihood
• Another example: Quasar selection in SDSS
• Maximum likelihood estimators: consistent
and unbiased?
• Minimum variance bound: error estimation
• Relation between MLE and least-squares
fitting
• My research work: mass distribution of M87
An example: # of fish in a
pond
Q: There are more than 10,000 fish in a pond. How do you
estimate the number of fish if you are the
manager of the pond?
A: 6 steps:
1) catch 1000 fish
2) mark them and put them back into the pond
3) catch another 1000 fish a few days later and see
how many have marks on them; say, 10
4) the fraction of marked fish: 10/1000 = 1/100
5) the number of fish = 1000/(1/100) = 100,000
6) 1-σ error = 35,000 (maximum likelihood)
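The 6-step recipe above is the capture-recapture (Lincoln–Petersen) estimate, and the slide's numbers reproduce directly — a sketch using only values given on the slide:

```python
# Capture-recapture estimate from the pond example:
# mark m fish, recapture c fish, observe k marked among them.
m, c, k = 1000, 1000, 10
N_hat = m * c // k     # ML estimate of population: 1000 / (10/1000)
print(N_hat)           # -> 100000
```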
Bayes’ theorem
P(A ∩ B) = P(A | B) P(B) = P(B | A) P(A)

P(B | A) = P(A | B) P(B) / P(A)

A: observation
Br: hypotheses, e.g. parameters
H: prior information

P(Br | A, H) ∝ P(Br | H) P(A | Br, H) = P(Br | H) L(A | Br, H)

"What you know about Br after the data A arrive is what you knew
before [P(Br | H)] and what the data told you [L(A | Br, H)]"

Posterior ∝ likelihood × prior (MacKay)
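Bayes' theorem can be exercised with a quick numeric sketch. The numbers below are assumed for illustration (they are not from the slides): a rare hypothesis B, a reasonably sensitive observation A.

```python
# Numeric check of P(B|A) = P(A|B) P(B) / P(A), with assumed probabilities.
p_B = 0.01                       # prior P(B)
p_A_given_B = 0.9                # likelihood P(A|B)
p_A_given_notB = 0.05            # likelihood under the alternative
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)  # total probability
p_B_given_A = p_A_given_B * p_B / p_A                 # posterior
print(p_B_given_A)               # posterior ~0.154: prior 0.01 pulled up by the data
```

Even a strong likelihood only moves a small prior so far — exactly the "posterior ∝ likelihood × prior" picture.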
Maximum likelihood
Bayes' theorem: P(Br | A, H) ∝ P(Br | H) L(A | Br, H)

Maximum likelihood: P(Br | H) = 1/N in case we have no prior
information, where N is the total number of hypotheses.

Theory / example:
Br: fish number r
Observation A: 10 marked and 990 unmarked
Prior information H: none
P(Br | H) = constant

L(A | Br, H) = C(1000, 10) · C(r − 1000, 990) / C(r, 1000)

1-σ error = 35,000
Another example: Quasar selection in
SDSS
Richards et al., 2002, AJ, 123, 2945

Goal: find quasar candidates from SDSS photometric plates


Requirements: Efficiency, completeness

Prior information:
1. Quasars are point sources
2. Efficiency is poorer in the galactic plane due to contamination
from stars.
3. Quasars and stars occupy different regions in color-color space

Result: completeness: 90%, efficiency: 65%


Quasar selection in SDSS

P(Br | A, H) ∝ P(Br | H) L(A | Br, H)
Br : whether an object is quasar
H : all prior information we can have
A: observed in photometric plates
Maximum likelihood estimation

f(x; θ1, θ2, …, θk)
θ: unknown constant parameters, {θ1, θ2, …, θk}
X: N independent observations, {x1, x2, …, xN}

L(X | θ) = Π_{i=1}^{N} f(xi; θ),   the N-dim p.d.f. of IIDs

ℓ = ln L = Σ_{i=1}^{N} ln f(xi; θ)

∂ℓ/∂θ = 0
Maximum likelihood estimators
f(x) = (1/(σ √(2π))) exp( −(x − μ)² / (2σ²) )

X: N independent observations, {x1, x2, …, xN}

L(X | μ, σ) = Π_i (1/(σ √(2π))) exp( −(xi − μ)² / (2σ²) )

ℓ = ln L = −(N/2) ln(2π) − N ln σ − (1/2) Σ ((xi − μ)/σ)²

∂ℓ/∂μ = (1/σ²) Σ (xi − μ) = 0

μ̂ = (1/N) Σ xi,    σ̂² = (1/N) Σ (xi − μ̂)²

X = {10, 20, 30, 40, 50}
Mean of estimators: consistency and bias

Consistency: N → ∞, |t − θ| → 0
If t is consistent, ((N − a)/(N − b)) t is also consistent.
Unbiased: E(t) = θ

Estimators:

t = (1/N) Σ xi,   t2 = (1/N) Σ (xi − μ̂)²,   t2′ = (1/(N−1)) Σ (xi − μ̂)²

E(t) = E( (1/N) Σ xi ) = μ,  consistent and unbiased
E(t2) = ((N − 1)/N) σ²,  consistent but biased
E(t2′) = σ²,  consistent and unbiased
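The estimators above can be evaluated on the sample X = {10, 20, 30, 40, 50} quoted on the previous slide — a short sketch comparing the ML (biased, 1/N) variance with the unbiased 1/(N−1) version:

```python
import numpy as np

X = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # the sample from the slides
N = len(X)
mu_hat = X.mean()                              # sample mean: 30.0
var_ml = np.sum((X - mu_hat) ** 2) / N         # ML variance: 200.0 (biased by (N-1)/N)
var_unbiased = np.sum((X - mu_hat) ** 2) / (N - 1)  # unbiased variance: 250.0
print(mu_hat, var_ml, var_unbiased)
```

The ratio var_ml/var_unbiased = (N−1)/N = 4/5 is exactly the bias factor stated for E(t2).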
Variance of estimators: Minimum
variance bound
Var(t) ≥ 1 / E( (∂ ln L / ∂θ)² ) = −1 / E( ∂² ln L / ∂θ² )

Equality holds if t is a linear function of ∂ ln L / ∂θ.

For example, t = (1/N) Σ xi is an MVB estimator.
t2 = (1/N) Σ (xi − μ)² is an MVB estimator for σ²,
but not an MVB estimator for σ.

In general, maximum likelihood estimators are biased and not
MVB estimators, thus errors should be obtained by bootstrap
estimation. If N → ∞, the estimators become consistent,
unbiased, minimum-variance estimators.
Relation between MLE and least-
squares fitting

A function y = f(x, α)
Data: {yi = f(xi) + Δyi}

ℓ = ln L = −(N/2) ln(2π) − Σ ln σi − (1/2) Σ ( (yi − f(xi, α)) / σi )²

maximize ℓ  ⇔  minimize Σ ( (yi − f(xi, α)) / σi )² = χ²

The same as the least-squares procedure.
Test of hypotheses

• Likelihood ratio
• Kolmogorov–Smirnov test
My research work: mass distribution of M87
Summary of MLEs
Pros
• No data binning; all observed information is used.
• They become unbiased minimum variance estimators
as the sample size increases.
• They can generate confidence bounds.
• Likelihood functions can be used to test hypotheses
about models and parameters.

Cons
• MLEs can be heavily biased.
• Calculating MLEs is often computationally
expensive.
Conclusions

If we know prior information, we
maximize the posterior
probability.
If we do not know any prior
information, we maximize the
likelihood.
EE345S Real-Time Digital Signal Processing Lab Fall 2008

Matched Filtering and Digital


Pulse Amplitude Modulation (PAM)
Slides by Prof. Brian L. Evans and Dr. Serene Banerjee
Dept. of Electrical and Computer Engineering
The University of Texas at Austin

Lecture 13
Outline
• Transmitting one bit at a time
• Matched filtering
• PAM system
• Intersymbol interference
• Communication performance
Bit error probability for binary signals
Symbol error probability for M-ary (multilevel) signals
• Eye diagram

13 - 58
Transmitting One Bit
• Transmission on communication channels is analog
• One way to transmit digital information is called
2-level digital pulse amplitude modulation (PAM)
[Figure: a ‘0’ bit is sent as a pulse x0(t) of amplitude −A over the bit period Tb, and a ‘1’ bit as a pulse x1(t) of amplitude A; the additive-noise channel maps the input x(t) to the output y(t). How does the receiver decide which bit was sent?]
Transmitting One Bit
• Two-level digital pulse amplitude modulation over
channel that has memory but does not add noise
[Figure: the communication channel is modeled as an LTI system with impulse response h(t) of duration Th; the received pulses y0(t) and y1(t) are smeared out to duration Th + Tb. Assume that Th < Tb.]
Transmitting Two Bits (Interference)
• Transmitting two bits (pulses) back-to-back
will cause overlap (interference) at the receiver
[Figure: x(t), a ‘1’ bit followed by a ‘0’ bit, convolved with h(t) gives y(t) whose pulses overlap; assume Th < Tb.]

• Sample y(t) at Tb, 2Tb, …, and
threshold with a threshold of zero; the overlap is intersymbol
interference.
• How do we prevent intersymbol
interference (ISI) at the receiver? 13 - 61
Preventing ISI at Receiver
• Option #1: wait Th seconds between pulses in
transmitter (called guard period or guard interval)
[Figure: with a guard period of Th seconds between pulses, x(t) * h(t) = y(t) has no overlap between adjacent received pulses; assume Th < Tb.]

Disadvantages?
• Option #2: use channel equalizer in receiver
FIR filter designed via training sequences sent by transmitter
Design goal: cascade of channel memory and channel
equalizer should give all-pass frequency response
13 - 62
Digital 2-level PAM System
ak{-A,A} s(t) x(t) y(t) y(ti)

bi Decisio 1
PAM g(t) h(t)  c(t)
Sample at
n
bits
Maker 0
t=iTb
pulse AWGN matche
Clock Tb Threshold 
shape w(t) d filter Clock T
r
b
N0 p 
opt  ln 0 
4 ATb  p1 
Transmitter Channel Receiver

• Transmitted signal s (t )  a k
k g (t  k Tb )
• Requires synchronization of clocks between
transmitter and receiver
13 - 63
Matched Filter
• Detection of a pulse in the presence of additive noise
Receiver knows what pulse shape it is looking for
Channel memory ignored (assumed compensated by other
means, e.g. channel equalizer in receiver)

[Figure: pulse signal g(t) plus AWGN w(t) → matched filter h(t) → y(t), sampled at t = T, where T is the symbol period.]

Additive white Gaussian noise (AWGN) with zero mean and
variance N0/2:

y(t) = g(t) * h(t) + w(t) * h(t) = g0(t) + n(t)
13 - 64
Matched Filter Derivation
• Design of matched filter
Maximize signal power, i.e. power of g0(t) = g(t) * h(t), at t = T
Minimize noise, i.e. power of n(t) = w(t) * h(t)
• Combine design criteria:

max η, where η is the peak pulse SNR:

η = |g0(T)|² / E{n²(t)} = instantaneous power / average power

[Figure: g(t) plus w(t) → h(t) → y(t), sampled at t = T.]
13 - 65
Power Spectra
• Deterministic signal x(t) with Fourier transform X(f)
Power spectrum is the square of the absolute value of the
magnitude response (phase is ignored):

Px(f) = |X(f)|² = X(f) X*(f)

Multiplication in the Fourier domain is convolution in the time
domain; conjugation in the Fourier domain is reversal and
conjugation in time:

X(f) X*(f) = F{ x(τ) * x*(−τ) }

• Autocorrelation of x(t): Rx(τ) = x(τ) * x*(−τ)
Maximum value at Rx(0)
Rx(τ) is even symmetric, i.e. Rx(τ) = Rx(−τ)

[Figure: a rectangular pulse x(t) of duration Ts has a triangular autocorrelation Rx(τ) supported on −Ts ≤ τ ≤ Ts.]
13 - 66
Power Spectra
• Power spectrum for signal x(t) is Px(f) = F{ Rx(τ) }
Autocorrelation of random signal n(t):

Rn(τ) = E{ n(t) n*(t − τ) } = ∫ n(t) n*(t − τ) dt = n(τ) * n*(−τ)

For zero-mean Gaussian n(t) with variance σ²:

Rn(τ) = σ² δ(τ)   →   Pn(f) = σ²

• Estimate the noise power spectrum (noise floor) in Matlab:

N = 16384;                        % number of samples
gaussianNoise = randn(N,1);
plot( abs(fft(gaussianNoise)) .^ 2 );
13 - 67
Matched Filter Derivation
[Figure: the noise power spectrum SW(f) = N0/2 is flat.]

• Noise n(t) = w(t) * h(t):

SN(f) = SW(f) |H(f)|² = (N0/2) |H(f)|²

E{n²(t)} = ∫ SN(f) df = (N0/2) ∫ |H(f)|² df

• Signal g0(t) = g(t) * h(t):

G0(f) = H(f) G(f)

g0(t) = ∫ H(f) G(f) e^(j2πft) df

|g0(T)|² = | ∫ H(f) G(f) e^(j2πfT) df |²
13 - 68
Matched Filter Derivation
• Find h(t) that maximizes pulse peak SNR:

η = | ∫ H(f) G(f) e^(j2πfT) df |² / [ (N0/2) ∫ |H(f)|² df ]

• Schwarz's inequality
For vectors:  |aᵀb| ≤ ||a|| ||b||,  since |aᵀb| = ||a|| ||b|| |cos θ|
For functions:

| ∫ φ1(x) φ2*(x) dx |² ≤ ∫ |φ1(x)|² dx · ∫ |φ2(x)|² dx

The upper bound is reached iff φ1(x) = k φ2(x), k ∈ R.
13 - 69
Matched Filter Derivation
Let φ1(f) = H(f) and φ2(f) = G*(f) e^(−j2πfT). Then

| ∫ H(f) G(f) e^(j2πfT) df |² ≤ ∫ |H(f)|² df · ∫ |G(f)|² df

η = | ∫ H(f) G(f) e^(j2πfT) df |² / [ (N0/2) ∫ |H(f)|² df ] ≤ (2/N0) ∫ |G(f)|² df

η_max = (2/N0) ∫ |G(f)|² df,  which occurs when

H_opt(f) = k G*(f) e^(−j2πfT)   by Schwarz's inequality

Hence, h_opt(t) = k g*(T − t).
13 - 70
Matched Filter
• Given transmitter pulse shape g(t) of duration T,
matched filter is given by hopt(t) = k g*(T-t) for all k
Duration and shape of impulse response of the optimal filter is
determined by pulse shape g(t)
hopt(t) is scaled, time-reversed, and shifted version of g(t)
• Optimal filter maximizes peak pulse SNR:

η_max = (2/N0) ∫ |G(f)|² df = (2/N0) ∫ |g(t)|² dt = 2 Eb / N0 = SNR

Does not depend on pulse shape g(t)
Proportional to signal energy (energy per bit) Eb
Inversely proportional to power spectral density of noise
13 - 71
Matched Filter for Rectangular Pulse
• Matched filter for causal rectangular pulse has an
impulse response that is a causal rectangular pulse
• Convolving the input with a rectangular pulse of duration
T sec and sampling the result at T sec is the same as:
First, integrate for T sec
Second, sample at symbol period T sec (“sample and dump”)
Third, reset the integration for the next time period
• Integrate and dump circuit

[Figure: integrator with a sampling switch at t = kT; h(t) = ___ (fill in)]
13 - 72
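The integrate-and-dump operation above is easy to sketch in discrete time (assumed parameters, noise-free for clarity): integrate each bit period, scale by the 1/T gain, then threshold at zero.

```python
import numpy as np

A, Ns = 1.0, 8                     # amplitude and samples per bit period T (assumed)
bits = [1, 0, 1]
tx = np.repeat([A if b else -A for b in bits], Ns)   # 2-level PAM waveform

# integrate over each period and scale by 1/Ns (the 1/T gain), then reset;
# this is exactly "integrate, sample, dump" done with a reshape
samples = tx.reshape(len(bits), Ns).mean(axis=1)
decided = [1 if s > 0 else 0 for s in samples]
print(decided)                     # recovers [1, 0, 1] in the noise-free case
```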
Digital 2-level PAM System
ak{-A,A} s(t) x(t) y(t) y(ti)

bi Decisio 1
PAM g(t) h(t)  c(t)
Sample at
n
bits
Maker 0
t=iTb
pulse AWGN matche
Clock Tb Threshold 
shape w(t) d filter Clock T
r
b
N0 p 
opt  ln 0 
4 ATb  p1 
Transmitter Channel Receiver

• Transmitted signal s (t )  a k
k g (t  k Tb )
• Requires synchronization of clocks between
transmitter and receiver
13 - 73
Digital 2-level PAM System
• Why is g(t) a pulse and not an impulse?
Otherwise, s(t) = Σ_k ak δ(t − k Tb) would require infinite bandwidth
Since we cannot send a signal of infinite bandwidth, we limit
its bandwidth by using a pulse shaping filter
• Neglecting noise, we would like y(t) = g(t) * h(t) * c(t)
to be a pulse, i.e. y(t) = μ p(t), to eliminate ISI:

y(t) = μ Σ_k ak p(t − kTb) + n(t),  where n(t) = w(t) * c(t) and p(t) is centered at the origin

y(ti) = μ ai p(0) + μ Σ_{k≠i} ak p((i − k)Tb) + n(ti)

actual value + intersymbol interference (ISI) + noise  (note that ti = i Tb)
13 - 74
Eliminating ISI in PAM
• One choice for P(f) is a rectangular pulse, where W is
the bandwidth of the system:

P(f) = 1/(2W)  for −W ≤ f ≤ W,  0  for |f| > W,   i.e.   P(f) = (1/(2W)) rect( f/(2W) )

The inverse Fourier transform of a rectangular pulse
is a sinc function:

p(t) = sinc(2 W t)

• This is called the Ideal Nyquist Channel
• It is not realizable because the pulse shape is not
causal and is infinite in duration
13 - 75
Eliminating ISI in PAM
• Another choice for P(f) is a raised cosine spectrum:

P(f) = 1/(2W)                                              for 0 ≤ |f| < f1
P(f) = (1/(4W)) { 1 − sin( π(|f| − W) / (2W − 2f1) ) }     for f1 ≤ |f| < 2W − f1
P(f) = 0                                                   for |f| ≥ 2W − f1

• The roll-off factor α = 1 − f1/W gives the bandwidth in excess
of the bandwidth W of the ideal Nyquist channel
• The raised cosine pulse

p(t) = sinc(2 W t) · cos(2 π α W t) / (1 − 16 α² W² t²)

has zero ISI when sampled correctly
(the ideal Nyquist channel impulse response, with dampening
adjusted by the roll-off factor α)
• Let g(t) and c(t) be square-root raised cosines
13 - 76
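The zero-ISI claim can be checked numerically. This is a sketch with assumed parameters W = 1 (so the symbol period is T = 1/(2W) = 0.5) and roll-off α = 0.35, chosen to avoid the removable singularity of the closed form:

```python
import numpy as np

W, alpha = 1.0, 0.35   # assumed bandwidth and roll-off factor

def p(t):
    # raised-cosine pulse: sinc(2Wt) * cos(2*pi*alpha*W*t) / (1 - 16 a^2 W^2 t^2)
    return np.sinc(2 * W * t) * np.cos(2 * np.pi * alpha * W * t) / (
        1.0 - 16.0 * alpha**2 * W**2 * t**2)

T = 1.0 / (2.0 * W)
print(p(0.0), p(T), p(2 * T))   # 1 at the origin, ~0 at nonzero multiples of T
```

The pulse equals 1 at t = 0 and vanishes at every other sampling instant kT, which is exactly the zero-ISI condition.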
Bit Error Probability for 2-PAM
• Tb is bit period (bit rate is fb = 1/Tb)
[Figure: s(t) = Σ_k ak g(t − k Tb) passes through the channel, r(t) = s(t) + v(t); the matched filter h(t) gives r̃(t) = h(t) * r(t), which is sampled at t = nTb to give rn.]

v(t) is AWGN with zero mean and variance σ²

• Lowpass filtering a Gaussian random process
produces another Gaussian random process
Mean scaled by H(0)
Variance scaled by twice the lowpass filter's bandwidth
• Matched filter's bandwidth is ½ fb
13 - 77
Bit Error Probability for 2-PAM
• Binary waveform (rectangular pulse shape) is ±A
over the nth bit period nTb < t < (n+1)Tb
• Matched filtering by integrate and dump (see slide 13-16)
Set gain of matched filter to be 1/Tb
Integrate received signal over the period, scale, sample:

rn = (1/Tb) ∫_{nTb}^{(n+1)Tb} r(t) dt = ±A + (1/Tb) ∫_{nTb}^{(n+1)Tb} v(t) dt = ±A + vn

[Figure: the probability density function (PDF) of rn has two Gaussian modes, centered at −A and A.]
13 - 78
Bit Error Probability for 2-PAM
• Probability of error given that the transmitted
pulse has an amplitude of −A:

P(error | s(nTb) = −A) = P(−A + vn > 0) = P(vn > A) = P( vn/σ > A/σ )

• The random variable vn/σ is Gaussian with
zero mean and variance of one (PDF is N(0, 1)):

P(error | s(nTb) = −A) = (1/√(2π)) ∫_{A/σ}^{∞} e^(−v²/2) dv = Q(A/σ)

Q function on next slide
13 - 79
Q Function
• Q function:

Q(x) = (1/√(2π)) ∫_x^∞ e^(−y²/2) dy

• Complementary error function erfc:

erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt

• Relationship:

Q(x) = ½ erfc( x/√2 )

(Erfc[x] in Mathematica; erfc(x) in Matlab)
13 - 80
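The Q/erfc relationship on this slide maps directly onto the standard library — a minimal sketch using `math.erfc`:

```python
import math

def Q(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2)), the relationship quoted on the slide
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(Q(0.0))   # 0.5: a zero-mean, unit-variance Gaussian exceeds 0 half the time
print(Q(1.0))   # ~0.1587: one-sigma tail probability
```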
Bit Error Probability for 2-PAM
• Probability of error given that the transmitted pulse
has an amplitude of A:

P(error | s(nTb) = A) = Q(A/σ)

• Assume that 0 and 1 are equally likely bits:

P(error) = P(A) P(error | s(nTb) = A) + P(−A) P(error | s(nTb) = −A)
         = ½ Q(A/σ) + ½ Q(A/σ) = Q(A/σ) = Q(√SNR),   where SNR = A²/σ²

• Probability of error decreases exponentially with SNR,
since for large positive x

Q(x) = ½ erfc( x/√2 ) ≈ e^(−x²/2) / (x √(2π))
13 - 81
PAM Symbol Error Probability
• Average signal power:

P_Signal = (E{an²} / Tsym) · (1/(2π)) ∫ |GT(ω)|² dω = E{an²} / Tsym

GT(ω) is the square root of the raised cosine spectrum
(normalization by Tsym will be removed in lecture 15 slides)

[Figure: 2-PAM constellation {−d, d} and 4-PAM constellation {−3d, −d, d, 3d}, with decision boundaries.]

• M-level PAM amplitudes:

li = d (2i − 1),   i = −M/2 + 1, …, 0, …, M/2

• Assuming each symbol is equally likely:

P_Signal = (1/Tsym) (1/M) Σ li² = (1/Tsym) (1/M) Σ d² (2i − 1)² = (M² − 1) d² / (3 Tsym)
13 - 82
PAM Symbol Error Probability
• Noise power and SNR:

P_Noise = (1/(2π)) ∫_{−ωsym/2}^{ωsym/2} (N0/2) dω = N0 / (2 Tsym)

(N0/2 is the two-sided power spectral density of the AWGN)

SNR = P_Signal / P_Noise = (2(M² − 1)/3) · (d² / N0)

• Assume an ideal channel, i.e. one without ISI:

x(nTsym) = an + vR(nTsym)

where vR is the channel noise filtered by the receiver and
sampled, with σ² = N0/2.

• Consider the M − 2 inner levels in the constellation.
An error occurs if and only if |vR(nTsym)| > d, with probability

P( |vR(nTsym)| > d ) = 2 Q( d/σ )

• Consider the two outer levels in the constellation:

P( vR(nTsym) > d ) = Q( d/σ )
13 - 83
Optional

Alternate Derivation of Noise Power

vR(nT) = ∫ gR(τ) w(nT − τ) dτ    (filtered noise, T = Tsym)

E{vR²(nT)} = E{ ∫∫ gR(τ1) w(nT − τ1) gR(τ2) w(nT − τ2) dτ1 dτ2 }
           = ∫∫ gR(τ1) gR(τ2) E{ w(nT − τ1) w(nT − τ2) } dτ1 dτ2
           = σ² ∫ gR²(τ) dτ       using E{w(t1) w(t2)} = σ² δ(τ1 − τ2)
           = σ² (1/(2π)) ∫_{−ωsym/2}^{ωsym/2} |GR(ω)|² dω
13 - 84
PAM Symbol Error Probability
• Assuming that each symbol is equally likely, the symbol error probability
for M-level PAM is
P_e = ((M − 2)/M) · 2 Q(d/σ) + (2/M) · Q(d/σ) = (2(M − 1)/M) Q(d/σ)
       M − 2 interior points          2 exterior points
• Symbol error probability in terms of SNR:
P_e = (2(M − 1)/M) Q( √( 3 SNR / (M² − 1) ) )
since SNR = P_Signal / P_Noise = ((M² − 1)/3) · (d²/σ²)
13 - 85
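The final expression lends itself to a quick numerical check; a minimal Python sketch (names are mine):

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_symbol_error(M, snr):
    # Pe = 2(M-1)/M * Q( sqrt( 3*SNR / (M^2 - 1) ) )
    return 2.0 * (M - 1) / M * Q(math.sqrt(3.0 * snr / (M * M - 1)))
```

For M = 2 this reduces to Q(√SNR), matching the binary result earlier in the slides, and for fixed SNR the error rate grows with M.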
Visualizing ISI
• Eye diagram is an empirical measure of signal quality
x(n) = Σ_k a_k g(nT_sym − kT_sym) = g(0) [ a_n + Σ_{k≠n} a_k g(nT_sym − kT_sym)/g(0) ]
• Intersymbol interference (ISI):
D = (M − 1) d Σ_{k≠n} |g(nT_sym − kT_sym)| / g(0) = (M − 1) d Σ_{k≠0} |g(kT_sym)| / g(0)
• Raised cosine filter has zero ISI when correctly sampled
• See slides 13-31 and 13-32
13 - 86
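The worst-case ISI sum D can be computed directly from a pulse's symbol-spaced samples; a minimal sketch (the sample dictionaries are invented for illustration):

```python
def worst_case_isi(g_samples, M, d):
    # D = (M-1) * d * sum over k != 0 of |g(k*Tsym)| / g(0)
    g0 = g_samples[0]
    tail = sum(abs(v) for k, v in g_samples.items() if k != 0)
    return (M - 1) * d * tail / g0

# Nyquist pulse: zero at all nonzero symbol-spaced sampling instants
nyquist = {-1: 0.0, 0: 1.0, 1: 0.0}
# pulse with residual tails at the sampling instants
leaky = {-2: 0.05, -1: 0.1, 0: 1.0, 1: 0.1, 2: 0.05}
```

A correctly sampled raised-cosine (Nyquist) pulse gives D = 0; any residual tails at the sampling instants contribute directly to eye closure.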
Eye Diagram for 2-PAM
• Useful for PAM transmitter and receiver analysis and troubleshooting
[Figure: M = 2 eye diagram over t − T_sym to t + T_sym, annotated with the
sampling instant, margin over noise, distortion over the zero crossing, the
interval over which the signal can be sampled, and the slope indicating
sensitivity to timing error]
• The more open the eye, the better the reception
13 - 87
Eye Diagram for 4-PAM
[Figure: 4-PAM eye diagram with levels at ±d and ±3d. Artifacts at the left
are due to startup transients; the fix is to discard the first few symbols,
equal to the number of symbol periods in the pulse shape]
13 - 88
Optimum Receiver Design
ENSC 428 – Spring 2007
[Slides consisting mostly of figures and equations that did not survive
extraction; titles:]
• Digital Communication System
• What is a Design Problem?
• Noise Model
• Equivalent Vector Channel Model
• Theorem of Irrelevance: { r_k | 1 ≤ k ≤ K } form sufficient statistics
• MAP Optimum Decision Rule
• ML Decision Rule
• Evaluation of Probabilities
• Optimum Receiver Structure
• Key Observations
• Optimum Correlation Receiver
• Simplified Receiver Cases
• Matched Filter Receiver
• Example: Receiver Design
• Key Facts
The Ubiquitous MATCHED FILTER
. . . it's everywhere!!
an evening with a very important principle that's finding exciting
new applications in modern radar
R. T. Hill, an IEEE AES Society Lecturer
Dallas Chapter, 25 September 2007
119
How do receivers work?
A brief review, then, of
• network theory — characterizing a receiver by its "impulse response function"
• representing radio signals
• reminding ourselves of "convolution" in linear systems . .
Wow! . . all in twenty minutes or so!
Well, first we'll need a receiver block diagram
120
“characterizing” a receiver (or generally, a network)
by its “impulse response function”

Why?
121
Do you see how we can represent any signal s(t) as a
collection of impulses . .?

. . the impulse is a wonderful function, so useful –


I’ll make some comments about it.

122
Also, our signal (certainly in radio/radar work)
is surely bipolar-cyclic . . a modulated carrier

. . that is, we can think of it, in radio work, as a sine wave


of voltage field intensity and polarity, leaving all the relationships
to its accompanying magnetic field and the medium at hand
“to Maxwell”, so to speak!

Do you see now


how impulses
could still be used
to represent even
this complex radio
signal?

123
+

Now, we see here that the output is indeed the sum


of the many impulse response functions, weighted and
translated by the input signal . . that the action of this
linear network is, by superposition, a convolution!

124
About designing our receiver . .

What impulse response function


do I want??

Well, what do we want our


receiver to do . . produce a
replica of our signal? NO!!
. . . discuss . . .

125
For maximum sensitivity to our own signal’s being at
the input, we don’t need to see a copy of it at the output . .
. . we simply need, at the output, the greatest possible
indication of its presence at the input.

In other words, we need to have an impulse response


function that, when convolved with our signal, would
give the greatest possible “signal to noise ratio” at the
output.

Oh, yes . . did I neglect to mention “noise”?


126
Yes, noise . . always present in radio work . .
Admitting, then, that our total input is, say, “gi(t)”,
made up of our desired signal AND accompanying
(but wholly independent) noise, we see that we can
treat these two inputs separately as they pass through
our receiver:

and our output is just the sum of the two convolutions – one due
to the signal being present and one from the always-present noise.

Oh, yes . . looks complicated, but nothing new here . . just
remember the words here! Let the math "talk" to you!
127
So, we might ask . .
. . WHAT Impulse Response Function
would produce the greatest possible indication, at the
output, of our signal’s arrival at the input?

Answer . .
. . various methods (differential calculus; the "Schwarz
inequality") lead to the conclusion that maximum sensitivity
is achieved when the IRF is the complex conjugate of the
subject signal:

Concepts:
► Signal to Noise Ratio (SNR) as a measure of sensitivity
► Representing our cyclic signal in this "A e^jθ" form
and, note that some details of necessary time
displacement notation are overlooked here 

128
This, then, is

The Matched Filter

A receiver the impulse response function of which is


the complex conjugate of a particular signal will produce
the greatest possible signal-to-noise ratio at the output
when that signal is at the input and in the presence of
independent and completely random noise . . . this
receiver is the most “sensitive” to the particular signal . .
. . it is “matched” to it.

129
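The maximization claim can be illustrated numerically: for a discrete signal in white noise of variance σ², the sampled output SNR of a filter h is |Σ h·s|² / (σ² Σ|h|²), and by the Schwarz inequality it peaks when h matches the signal. A small sketch (the signal values are invented):

```python
def output_snr(h, s, noise_var=1.0):
    # peak signal power over filtered-noise power at the sampling instant
    sig = abs(sum(hi * si for hi, si in zip(h, s))) ** 2
    noise = noise_var * sum(abs(hi) ** 2 for hi in h)
    return sig / noise

s = [0.5, -1.0, 2.0, 1.5]   # a real example signal: its conjugate is itself
matched = list(s)           # matched filter; time reversal is absorbed in the sum
mismatched = [1.0, 1.0, 1.0, 1.0]
```

The matched choice attains SNR equal to the signal energy over the noise variance; any other filter does worse.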
Representing our signal by a rotating vector . .
. . a great convenience

. . a vector rotating at the carrier frequency, amplitude modulated


by s(t) with possible phase modulation (a binary phase coded signal,
for example) shown as Φ(t) . . do you know, or remember, this
convention? Sure helps in diagramming a lot of things in today’s
signal processing.

130
Some discussion . . to improve our understanding

. . consider a four-segment amplitude- and phase-modulated signal,


and for the moment, without noise . .

. . something similar to a child’s


scattering the blocks with which he had
made a tower . . what if we wanted to
see that maximum height (rebuild the
tower) again? I would re-align the blocks
by multiplying each by its conjugate
(remember: angles add when vectors
are multiplied) and – voila! – the tower
appears again, maximum possible height!
Is “angle” then enough? The conjugate involves the amplitude . .why?

131
In conjugating the phase modulation of our signal, why multiply
in the convolution by the amplitude as well?

Ah . . we cannot ignore the noise!

The circles here show an expectation of the noise contribution.


Our input signal gi(t) is our own signal abcd and this noise . . but
notice, the noise is (of course) of the same strength regardless of
the amplitude of abcd at that time. Also note, the noise is completely
random. This utter randomness and independence of abcd are properties
of “Gaussian” noise, “white” noise, as from natural thermal phenomena.
Now, to convolve with, say, a unit level phase-only conjugate would
exaggerate some of the noise effects in the angle-corrected vector
addition – unwise. The best thing for us to do, in such noise, is
indeed to ignore it! “Match” to our signal alone!
132
OK . . but just one further thought . .

Remember the child’s block tower? Consider:

The child’s playroom is subject to a mild earthquake


(good grief!) as the blocks are tumbled in the way
we expected.
“White” noise . . we’re OK . . use the matched filter

On the other hand, what if a “wind” had been blowing


distinctly from, say, the west as the blocks were tumbled
in addition to the earthquake’s vibratory behavior?

Not random!! Biased! We’d better compensate


for that, assuming we can sense it. That is,
we may wish to use a “whitening” filter to
randomize the disturbance before the matched filter!

That idea is indeed “key” to much of the adaptive signal processing


so strong in today’s radar literature . . . more about that to come

133
Where do we use
the Matched Filter?
Some illustrations . .
Pulse compression, very common in radar . .

Receive beam steering, direction finding in antennas,


the “adaptive antenna” . .

Space-Time Adaptive Processing (STAP) in radar . .

Polarimetry in radar, adaptive processing, target recognition . .

134
Illustration # 1 . . Pulse Compression

► First, some remarks about pulse compression


in modern radar . . fine range resolution desired,
but still with long pulse for lots of energy

► Achieved by modulating the transmitted pulse,


then “compressing” the pulse on receive
with (of course) a matched filter

► Techniques – binary phase coding widely used;


typical lengths of hundreds to one – our
example here? A mere four to one!

135
• Binary phase coding with a "tapped delay line"
first bit out

Showing a 180°
phase shift on one tap,
giving the binary code sequence “ + - + + “ , one of the Barker codes.

On receive (after down conversion), the signal is sent through


its Matched Filter . . in this case, the same circuit with the taps reversed:

+ + - +


Clearly, pulse compression is a “convolution” process, and we see the “time” or “range” sidelobes
in the output which, for all the Barker binary codes, are never more than unity value, while the narrow
main peak is full value, the number of bits in the code. In this matched situation, the output is the

“autocorrelation function”, and a low sidelobe level is a very desirable attribute of a candidate code.
136
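The compressed output described above can be reproduced in a few lines: the matched-filter output at integer lags is the code's aperiodic autocorrelation, with a main peak equal to the code length and (for Barker codes) sidelobes never exceeding unity. A Python sketch:

```python
def autocorr(code):
    # aperiodic autocorrelation r[k] = sum over n of code[n] * code[n+k], k >= 0
    N = len(code)
    return [sum(code[n] * code[n + k] for n in range(N - k)) for k in range(N)]

barker4 = [1, 1, -1, 1]   # the "+ + - +" sequence from the slide
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
```

For `barker4` the output lags are [4, -1, 0, 1]: a full-height peak of 4 with unit-or-less "range sidelobes", exactly the compression behavior shown on the slide.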
A peculiar thing about binary phase coding
The idea that the “tapped delay line, backwards”
is indeed a conjugating matched filter is not so clear
in binary phase coding . . adding or subtracting 180°
results in the same zero phase for that bit (all bits
then being phase aligned).

Just to illustrate conjugation more clearly, imagine that our


four-segment sequence had been 0°, -30°, 0°, 0° – terrible
autocorrelation function, but it makes our point about phase
“realignment” by conjugation in this convolution process.

Pulse expander Pulse compressor Compressed pulse output (here


on transmit on receive showing rather poor range sidelobes)

137
The Barker codes:                          sidelobe level
Length  2   + −  and  + +                  −6.0 dB
        3   + + −                          −9.5
        4   + + − +  and  + + + −          −12.0
        5   + + + − +                      −14.0
        7   + + + − − + −                  −16.9
        11  + + + − − − + − − + −          −20.8
        13  + + + + + − − + + − + − +      −22.3 dB
Modulo 2 adder

Seven-stage shift register

This seven-stage shift register is used to generate a 127-bit binary sequence
that can in turn be used to control a (0°, 180°) phase shifter through which our
IF signal is passed in the pulse modulator of our waveform generator. Such
shift-register generators produce sequences of length 2^N − 1 (before repeating;
N is the number of stages). Today, computer programs generate the modulation,
storing sequences known to have good autocorrelation functions, for many lengths
other than just 2^N − 1.
138
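A maximal-length shift-register generator of the kind described can be sketched in a few lines; here a 7-stage Fibonacci LFSR with feedback taken from stages 6 and 7 (recurrence b_n = b_{n−6} ⊕ b_{n−7}, a primitive polynomial), giving period 2^7 − 1 = 127. The tap choice and names are mine, not from the slide:

```python
def lfsr_sequence(nbits, nstages=7, taps=(6, 7), seed=1):
    # Fibonacci LFSR over GF(2): stage 1 is newest, output from the last stage.
    # Feedback bit = XOR of the tapped stages; seed must be nonzero.
    state = [(seed >> i) & 1 for i in range(nstages)]
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out
```

One period contains 64 ones and 63 zeros (the balance property of m-sequences), and the sequence repeats every 127 bits.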
Illustration # 2 . . Antennas; the “Adaptive” Array

► First, some remarks about how antennas form receive


beams, phased arrays the simplest and very
pertinent illustration

► Next, we’ll observe that compensating for the “angle


of arrival” for an echo is indeed a form of (you guessed it)
a matched filter, this one in “angle space”

► Then, we’ll consider (again in angle space) that the “noise”


may NOT be utterly random, not “white” (statistically
uniform) in angle . . we may need a “whitening” filter
before our matched filter 

139
First, consider a few discrete elements of a phased array, a line array . .

These simple sketches


will remind us of how
antennas, phased arrays
specifically, perform beam
steering . . a matter of
compensating for the
element-to-element phase
difference resulting from
the path-length differences
associated with the desired
beam-steering angle (the
angle-of-arrival “under test”,
so to speak)

140
. . continuing . .

Can you see how the


“compensating” phase
control in the phased
array is acting as a
conjugating matched
filter? Here, the “segments”
of our “signal” are NOT
a function of time (as before
discussed in signal processing)
but rather a function of space,
the position of each element
of the array. The same Matched Filter principle applies, but here in
a different “dimensional space” than normally considered in matched
filter teaching.

But what was the other part of the principle?


Ah . . that the “background noise” be independent and random . .
. . necessary condition for the Matched Filter to give
best possible output (here, angle measurement accuracy).

Is our noise stationary in angle, uniform statistically??? Perhaps not!! 

141
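The "matched filter in angle space" idea can be made concrete: a plane wave from angle θ reaches element n of a half-wavelength-spaced line array with phase 2π·(d/λ)·n·sin θ, and using the conjugate of that phase as the element weight maximizes the coherent sum. A Python sketch (the names and the 8-element geometry are my own illustration):

```python
import cmath, math

def element_phase(n, angle_deg, spacing_wl=0.5):
    # phase seen at element n for a plane wave arriving angle_deg off broadside
    return 2.0 * math.pi * spacing_wl * n * math.sin(math.radians(angle_deg))

def steer_weights(angle_deg, n_elems, spacing_wl=0.5):
    # conjugate ("matched") weights for the desired angle of arrival
    return [cmath.exp(-1j * element_phase(n, angle_deg, spacing_wl))
            for n in range(n_elems)]

def array_output(weights, angle_deg, spacing_wl=0.5):
    # magnitude of the weighted coherent sum for a wave from angle_deg
    return abs(sum(w * cmath.exp(1j * element_phase(n, angle_deg, spacing_wl))
                   for n, w in enumerate(weights)))
```

Steering to the true arrival angle realigns all the element phases, giving the full gain of N; any other angle gives less — the same Schwarz-inequality argument as the temporal matched filter.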
The “Adaptive Antenna” . .

Discussion

► Spatial analysis analogous


to “spectral analysis”

► Finding compensating weights


for each element involves
solving as many simultaneous
algebraic equations . . inverting
the covariance matrix NOT EASY

► Adapted pattern will be "inverse" to the angular distribution of noise,
"whitening" it
[Figure caption: Array signal processing . . first, spatial analysis, then
compensation to "whiten"]
► Today's state of the art . . 16 DOF rather standard . .

Bottom line – the coherent sidelobe cancellers (CSLC)


– the more elaborate “adaptive phased arrays”
are forms of spatial “whitening filters”, here to “whiten”
the heterogeneous disturbance – noise – in angle. Why? So that
a straightforward angle matched filter can be used most effectively.

142
Illustration # 3 . . Space-Time Adaptive Processing, STAP

► Adaptive antenna processing is Space-Adaptive. What is


meant by Space-Time Adaptive Processing?

► A few remarks about Doppler processing in radar, itself


an application of the Matched Filter . . Doppler filters
are indeed filters matched to a particular Doppler shift

► Many radars, airborne ones particularly, need to do Doppler


processing when the background (noise, continuous ground
clutter) is certainly NOT spectrally uniform . . once again,
we’ll need a “whitening” filter

143
Doppler filtering
Theory View a single Doppler “filter” as a classic “Matched Filter”,
that is, we multiply (convolve) the input signal
with the conjugate of the signal being sought.

sample #1 2 3 4

signal

reference

product

Recall, phase angles add when complex numbers (vectors) are multiplied – that is,
the signal is “rotated back” in phase by the amount it might have been progressing
in phase . . To the extent that such a component was in the input signal will we get
an output in this particular filter. We’ve built a Matched Filter for that component
(that frequency component) alone. HOWEVER, this is best ONLY IF background
noise is utterly random in Doppler frequencies . . 

144
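A Doppler filter bank built this way is just a DFT: filter k multiplies the pulse-train samples by the conjugate of exp(j2πkn/N) and sums, so only the matched component integrates coherently. A minimal sketch (the 8-pulse example is invented):

```python
import cmath

def doppler_filter_bank(samples):
    # bank of matched filters, one per Doppler bin: N-point DFT magnitudes
    N = len(samples)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(samples)))
            for k in range(N)]

# echo train whose pulse-to-pulse phase progression lands in bin 3 of 8
echoes = [cmath.exp(2j * cmath.pi * 3 * n / 8) for n in range(8)]
response = doppler_filter_bank(echoes)
```

Only the filter matched to the echo's phase progression "rotates the signal back" and sums to full amplitude; the other bins sum to (essentially) zero.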
The airborne radar situation . . for discussion
Broadband interference
(jamming) suggests need
for adaptive antenna

Terrain features
contribute to
non-uniform spectrum
of the side-lobe coupled
ground clutter

An airborne radar

Do you see the need for “whitening” the background


in BOTH the angle dimension (the broadband interference
is purely angle dependent) and in Doppler shift (the ground
reflectivity may NOT be utterly random, uniformly distributed in angle)?

145
STAP – to be adaptive in both the antenna’s pattern (as before discussed)
and also in the weights to put on each pulse return to shape the
Doppler filters in spectrum (compensating for the non-uniform
spectrum of the background clutter)

Space

Time The “data field”


available to us

To adapt to the background's sensed heterogeneity in both angle and spectrum,
we must solve (to be "fully adaptive") MN simultaneous equations (size of the
covariance matrix to invert: MN × MN). No wonder, then, today's literature is
full of STAP papers addressing ways to "reduce the dimensionality" of the
processing, to find the best that we can do in "partial adaptivity"! Very
exciting work!

146
Illustration # 4 . . The Polarimetric Matched Filter

► First, a short general review of polarimetry in radar,


its uses, its value

► Then, an example of the Polarimetric Whitening Filter


and how a polarimetric radar image (by SAR)
is improved just from PWF application to the area clutter

► Of course, the “whitening” to randomize the polarization state


of the surrounding area (local clutter in a scene) permits us
then to search for targets (building, vehicles) the
“polarimetric signature” of which may have been
estimated in advance.

147
Radar Polarimetry . . a little review
► Polarization of an Electro-Magnetic wave is taken as the
spatial orientation of the E-field . . most, but certainly
not all, radars are designed to operate, for various
reasons, in either horizontal or vertical (linear)
polarization, fixed by the antenna design – that is,
they are not “polarimetric”

► A “Fully Polarimetric Radar” (FPR) can, first, transmit one


polarization and separately measure the received
signal in each of two orthogonal polarizations, then
do the same, transmitting the orthogonal polarization
(e.g., transmit H, receive H and V;
then transmit V, receive H and V)

► We learn a lot about a target by sensing its polarimetric scattering

► Developed well by the meteorological radar community, some


other specialty radars
148
Polarimetry used for image enhancement

“Whitening” and “Matching” filters

► The work under Dr. Les Novak (MIT Lincoln Laboratory) in the 1990s is extremely
valuable in establishing these approaches to image enhancement by polarimetry.
A number of papers in our conferences (to be cited here) and other teaching material
he has provided me contribute to this instruction. An airborne SAR at 33 GHz,
fully polarimetric, was used in many valuable experiments there.

► Review . . Detection of things of interest (targets) in the presence of return


not of interest (noise, clutter) requires contrast between the two in some
observable dimension space (here, our image).

► The idea of “whitening” and “matching” is universal, forms matched filter theory.

● The whitening filter: attempts to minimize the “speckle” of the background,


– that is, the standard deviation among the pixels of the clutter – in
images formed by combining the complex images in HH, HV and VV
using complex weights among them, weights that minimize the
correlation in cluttered regions among the three images. The weights
are based on knowledge of the clutter covariance matrix, a priori in the
Novak work reviewed here, and involved its inversion, not difficult with
order three. With such PWF, cells that do not “belong” to the clutter will
have increased contrast with the background and are more easily seen.

● The matched filter: attempts to maximize the target intensity to clutter intensity
in the combined image, by using weights based on knowledge (estimates)
of the polarimetric covariance of target AND clutter returns.

149
(Polarimetry and image enhancement, cont.)

. . from the Novak, Lincoln Laboratory work on


PWF . . a dual-power-line scene, images by
the 33 GHz fully polarimetric airborne SAR.

The histograms show the increased contrast


(separation of the clutter and towers compilations)
afforded by PWF processing compared to a
non-polarimetric image, here the HH.

One can see the visible effect in the two images


above, HH on the left, PWF-processed at right.

All here with 1 foot x 1 foot resolution.


150
Well . . did we make it to this concluding slide??

The Matched Filter

● The conjugate impulse response function –


max sensitivity to a signal
in the presence of white noise

● Normally taught in the context of just temporal signal


processing . . functions of time, etc

● Should be no less seen by students of radar as the


underlying principle to many advances, in
“other” dimension spaces: angle (antenna
patterns), spectral analysis (Doppler filtering),
polarimetric analysis (as in synthetic aperture
radar image enhancement)

● Today’s “adaptive” processes are generally the MF-related


“whitening” required in non-random environments

151
The End
More than you wanted to know about

The Matched Filter

152