DC Digital Communication PART6
SNR = |g_o(T)|² / E[n²(t)] = |g_o(T)|² / σ_n²
where |g_o(T)|² is the instantaneous power of the filtered signal g_o(t) at t = T, and σ_n² is the variance of the filtered zero-mean white Gaussian noise.
Deriving the matched filter (3/8)
We sample at t = T because that is where the power of the filtered signal is maximized.
Examine g_o(t). By the Fourier transform,
G_o(f) = G(f) H(f)
so g_o(t) = ∫ G(f) H(f) e^{j2πft} df
then |g_o(t)|² = |∫ G(f) H(f) e^{j2πft} df|²
Deriving the matched filter (4/8)
Examine σ_n²:
σ_n² = E[n²(t)] − (E[n(t)])², but the noise is zero mean, so σ_n² = E[n²(t)]
and recall that
E[n²(t)] = var[n(t)] = R_n(0), the autocorrelation at τ = 0
R_n(τ) = ∫ S_n(f) e^{j2πfτ} df   (the autocorrelation is the inverse Fourier transform of the power spectral density)
so R_n(0) = ∫ S_n(f) df
Deriving the matched filter (5/8)
Recall that passing a process with PSD S_X(f) through a filter H(f) gives the output PSD S_Y(f) = S_X(f) |H(f)|².
In this case, S_X(f) is the PSD of white Gaussian noise, S_X(f) = N_0/2.
Since S_n(f) is our output PSD:
S_n(f) = (N_0/2) |H(f)|²
σ_n² = E[n²(t)] = R_n(0) = ∫ (N_0/2) |H(f)|² df = (N_0/2) ∫ |H(f)|² df
so
SNR = |∫ H(f) G(f) e^{j2πfT} df|² / [(N_0/2) ∫ |H(f)|² df]
Deriving the matched filter (6/8)
To maximize the SNR, use the Schwarz inequality:
|∫ φ_1(x) φ_2(x) dx|² ≤ ∫ |φ_1(x)|² dx · ∫ |φ_2(x)|² dx
Requirement: in this case, φ_1 and φ_2 must be finite-energy signals.
Equality holds if and only if φ_1(x) = k φ_2*(x).
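A quick numerical check of the Schwarz inequality; the example signals below are hypothetical, chosen only to be finite-energy:

```python
# Numerical check of |∫ φ1 φ2 dx|^2 <= ∫|φ1|^2 dx * ∫|φ2|^2 dx,
# with equality when φ1(x) = k φ2*(x).
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

# Hypothetical finite-energy signal φ2: a complex-modulated Gaussian
phi2 = np.exp(-x**2) * np.exp(1j * 2 * np.pi * 0.3 * x)

def sides(phi1):
    """Return (LHS, RHS) of the Schwarz inequality via Riemann sums."""
    lhs = abs(np.sum(phi1 * phi2) * dx) ** 2
    rhs = (np.sum(abs(phi1) ** 2) * dx) * (np.sum(abs(phi2) ** 2) * dx)
    return lhs, rhs

# A generic φ1 gives strict inequality
lhs1, rhs1 = sides(np.exp(-(x - 1.0) ** 2))

# φ1 = k φ2* gives equality (up to round-off)
lhs2, rhs2 = sides(3.0 * np.conj(phi2))
```

The second case confirms the equality condition that drives the matched-filter choice of H(f) later in the derivation.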
Deriving the matched filter (7/8)
We pick φ_1(f) = H(f) and φ_2(f) = G(f) e^{j2πfT} and want to make the numerator of the SNR as large as possible:
|∫ H(f) G(f) e^{j2πfT} df|² ≤ ∫ |H(f)|² df · ∫ |G(f) e^{j2πfT}|² df = ∫ |H(f)|² df · ∫ |G(f)|² df
Therefore
SNR ≤ [∫ |H(f)|² df · ∫ |G(f)|² df] / [(N_0/2) ∫ |H(f)|² df] = (2/N_0) ∫ |G(f)|² df
so the maximum SNR = (2/N_0) ∫ |G(f)|² df, according to the Schwarz inequality.
Deriving the matched filter (8/8)
Inverse transform
Assume g(t) is real. This means g(t) = g*(t).
If F{g(t)} = G(f), then F{g*(t)} = G*(−f), so G(f) = G*(−f) for a real signal g(t), i.e. G*(f) = G(−f) by conjugate symmetry.
Equality in the Schwarz inequality requires H(f) = k G*(f) e^{−j2πfT}. Find h(t), the inverse transform of H(f):
h(t) = k ∫ G*(f) e^{−j2πfT} e^{j2πft} df
     = k ∫ G(−f) e^{−j2πf(T−t)} df
     = k ∫ G(f) e^{j2πf(T−t)} df
     = k g(T − t)
so h(t) is the time-reversed and delayed version of the input signal g(t).
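The result h(t) = k g(T − t) can be checked numerically: with k = 1 and a hypothetical real pulse g(t) (the sinusoid below is my choice), the filter output y(t) = (g ∗ h)(t) peaks exactly at t = T, and the peak value equals the pulse energy:

```python
import numpy as np

dt = 1e-3
T = 0.5                              # pulse duration and sampling instant
t = np.arange(0.0, T + dt / 2, dt)
g = np.sin(2 * np.pi * 4 * t)        # hypothetical transmit pulse g(t) on [0, T]

h = g[::-1]                          # matched filter h(t) = g(T - t), with k = 1

y = np.convolve(g, h) * dt           # filter output y(t) = (g * h)(t)
t_y = np.arange(len(y)) * dt

peak_time = t_y[np.argmax(y)]        # where the output is largest
energy = np.sum(g ** 2) * dt         # pulse energy ∫ g(t)^2 dt
```

The peak lands at t = T with value ∫ g²(t) dt, which is the time-domain counterpart of the (2/N₀)∫|G(f)|² df result above.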
Sampling the matched filter output at t = T, the output becomes y_j(T) = ∫ x(τ) φ_j(τ) dτ, the correlation of the received signal x(t) with φ_j(t).
The equivalence of correlation and
matched filter receivers (3/3)
So we can see that the detector part of the receiver may be implemented using either matched filters or correlators. The output of each correlator is equivalent to the output of the corresponding matched filter when sampled at t = T.
[Figure: equivalent implementations as a bank of matched filters and as a bank of correlators]
Maximum Likelihood Receiver
• The transmitter sends one of M signals
si(t), for i=1,2,…,M
• The M signals form a constellation in the signaling space
[Figure: constellation of signal points s1, …, s7 in the signal space]
• The received signal x(t) = si(t) + n(t) is decomposed into its components in the signal space
[Figure: a bank of correlators (multiply x(t) by each basis function and integrate) produces x1 = si1 + n1, x2 = si2 + n2, …, xN = siN + nN, followed by a decision rule that outputs ŝi(t)]
E[xj | si] = sij
var[xj] = var[nj] = N0/2
f_Xj(x) = (1/√(πN0)) exp[−(x − sij)²/N0]
f_{X|sj}(x1, x2, …, xN) = (πN0)^{−N/2} exp[−Σ_{i=1}^{N} (xi − sji)²/N0]
ŝ = arg min_{sj} d(x, sj), where d²(x, sj) = Σ_{i=1}^{N} (xi − sji)²
The maximum likelihood receiver picks the signal that is closest to the received signal in the signal space.
[Figure: received point x lies at distance dmin from s2 in the constellation s1, …, s7]
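A minimal sketch of the minimum-distance (maximum likelihood) decision; the 2-D constellation below is hypothetical, standing in for the s1, …, s7 of the figure:

```python
import numpy as np

# Hypothetical 2-D constellation (signal points in signal space)
constellation = np.array([[ 1.0,  1.0],
                          [-1.0,  1.0],
                          [-1.0, -1.0],
                          [ 1.0, -1.0]])

def ml_decide(x):
    """Return the index of the constellation point closest to x.
    Minimum Euclidean distance = maximum likelihood for AWGN."""
    d2 = np.sum((constellation - x) ** 2, axis=1)   # squared distances
    return int(np.argmin(d2))

decision = ml_decide(np.array([0.9, 1.2]))   # a noisy version of point 0
```

Because the noise components are i.i.d. Gaussian, maximizing f_{X|sj} is exactly minimizing Σ(xi − sji)², which is what the squared-distance computation does.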
EECE 477/545
Communication System II
(Digital Communications)
Dr. X. Li
1) Time-domain signal & signal space representation
s_i(t) = Σ_{j=1}^{N} a_ij φ_j(t)  ↔  s_i = (a_i1, …, a_iN)
r(t) = s_i(t) + n(t)  ↔  r = s_i + n = (a_i1, …, a_iN) + (n_1, …, n_N)
2) Use the basis {φ_j(t)} as matched filters
[Figure: r(t) feeds a bank of matched filters φ_1(T − t), …, φ_N(T − t); sampling at t = T gives r_1 = a_i1 + n_1, …, r_N = a_iN + n_N]
• Why does the figure give the matched filter output?
[Figure: r(t) → MF → z(T) = r → Detector → ŝi (symbols or bits)]
2) What is a detector?
• Detector is a decision-making device that makes decisions about the transmitted symbols based on the received signal samples (that is, it maps PAM samples z(T) into symbols or binary digits).
3) How does a detector work?
• Detector works based on the statistical property of the received
samples and the transmitted symbols
• Detector is important because of random noise
• Objective is to minimize decision error probability
II. Maximum likelihood detector
– Maximum likelihood detector is a detector that
minimizes decision error probability by choosing
the symbols that most likely produce the
received samples.
• “Most likely”: quantified by likelihood function
– Likelihood function for si: the probability of the
received samples when the transmitted symbol
is si
• Example: if tossing a die, what is the likelihood of obtaining "1", etc.?
• In our case, if 1 is transmitted, what is the likelihood that 0.5 or −0.4 is received (due to noise)?
III. Binary PAM transmission case
1) Input to the detector (use signal space
representation)
z(T) = a_i(T) + n_0(T), or with simplified notation,
z = a_i + n_0
where a_i = ±√E_b, i = 1, 2, is determined by the symbol s_i, and n_0 is Gaussian noise with zero mean and variance σ_0² = N_0/2.
ŝ = arg max_{a_i} p(z | a_i)
• Example 6.2. {s1 = 0, s2 = 1} produce samples {a1 = 0.1, a2 = −0.1}, N0/2 = 10⁻³. If the received sample is z = 0.05, what is the best decision?
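A sketch of Example 6.2 worked numerically (the variable names are mine): the decision comes from comparing the two Gaussian likelihoods, which is equivalent to comparing the distances |z − ai|:

```python
import math

sigma2 = 1e-3              # noise variance sigma_0^2 = N0/2
a = {"s1": 0.1, "s2": -0.1}
z = 0.05                   # received sample

def likelihood(z, ai, var):
    """Gaussian likelihood p(z | ai) with mean ai and variance var."""
    return math.exp(-(z - ai) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

p = {s: likelihood(z, ai, sigma2) for s, ai in a.items()}
best = max(p, key=p.get)   # ML decision: same as minimum distance |z - ai|
```

Since z = 0.05 is closer to a1 = 0.1 than to a2 = −0.1, the maximum likelihood decision is s1.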
4) Maximum likelihood decision rule can be further simplified to a minimum distance decision rule:
ln p(z | ai) = −(z − ai)²/(2σ0²) − ln(σ0√(2π))
arg max_{ai} p(z | ai) is equivalent to arg min_{ai} (z − ai)²
With a1 = −a2 = √Eb, the decision threshold is 0.
IV. Extend Binary Decision Rule to M-ary
1) M-ary PAM: there are M different symbols si,
which give M different ai, and hence M likelihood
functions p(z|ai).
• Minimum distance decision rule: find the signal point ai
that is closest to the received value z.
• Exercise 6.4. Determine the decision rule for M=4
PAM.
2) QPSK
• We have 2-dimensional signal space, each sampling
value z or signal point ai is a 2-dimensional vector.
• Minimum distance decision rule
Minimum distance decision rule and maximum likelihood
Xiaoan Wu
Feb 16, 2005
Thomas Bayes (1702–1761)
Outline
• An example: count # of fish in a pond
• Bayes’ Theorem and maximum likelihood
• Another example: Quasar selection in SDSS
• Maximum likelihood estimators: consistent
and unbiased?
• Minimum variance bound: error estimation
• Relation between MLE and least-squares
fitting
• My research work: mass distribution of M87
An example: # of fish in a pond
Q: There are more than 10,000 fish in a pond. How do you estimate the number of fish if you are the manager of the pond?
A: 6 steps:
1) catch 1000 fish
2) mark them and put them back to the pond
3) catch another 1000 fish in a few days and see
how many have marks on them, say, 10.
4) The fraction of marked fish: 10/1000 = 1/100
5) The number of fish = 1000/(1/100) = 100,000
6) 1-σ error = 35,000 (maximum likelihood)
Bayes’ theorem
P(A ∩ B) = P(A | B) P(B) = P(B | A) P(A)
P(B | A) = P(A | B) P(B) / P(A)
A: observation
B_r: hypotheses, e.g. parameters
H: prior information
P(B_r | A, H) ∝ P(B_r | H) P(A | B_r, H) = P(B_r | H) L(A | B_r, H)
"What you know about B_r after the data A arrive is what you knew before [P(B_r | H)] and what the data told you [L(A | B_r, H)]"
Posterior ∝ likelihood × prior (MacKay)
Maximum likelihood
Bayes' Theorem: P(B_r | A, H) ∝ P(B_r | H) L(A | B_r, H)
Maximum likelihood: P(B_r | H) = 1/N in case we have no prior information, where N is the total number of hypotheses.
Theory → Example:
B_r: fish number r
Observation A: 10 marked and 990 unmarked
Prior information H: none, so P(B_r | H) = constant
L(A | B_r, H) = C(1000, 10) C(r − 1000, 990) / C(r, 1000)
1-σ error = 35,000
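A sketch of the maximum likelihood search for the fish number, using the hypergeometric likelihood above; the search range is an arbitrary choice of mine:

```python
from math import lgamma

def log_comb(n, k):
    """log C(n, k) via log-gamma, to avoid huge integers."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def log_likelihood(r, marked=1000, caught=1000, marked_in_catch=10):
    """log L(A | B_r) = log [C(marked, 10) C(r - marked, 990) / C(r, caught)]."""
    unmarked_in_catch = caught - marked_in_catch
    return (log_comb(marked, marked_in_catch)
            + log_comb(r - marked, unmarked_in_catch)
            - log_comb(r, caught))

# Hypothetical search window around the intuitive answer
candidates = range(50_000, 200_001)
r_hat = max(candidates, key=log_likelihood)
```

The argmax lands at the intuitive estimate 1000/(10/1000) = 100,000 (to within one, since the discrete likelihood is flat at the top).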
Another example: Quasar selection in
SDSS
Richards et al., 2002, AJ, 123, 2945
Prior information:
1. Quasars are point sources
2. Efficiency is poorer in the galactic plane due to contamination
from stars.
3. Quasars and stars occupy different regions in color-color space
P(B_r | A, H) ∝ P(B_r | H) L(A | B_r, H)
B_r: whether an object is a quasar
H: all prior information we can have
A: observations in photometric plates
Maximum likelihood estimation
f(x; θ1, θ2, …, θk)
θ: unknown constant parameters, θ = {θ1, θ2, …, θk}
X: N independent observations, {x1, x2, …, xN}
L(X | θ) = Π_{i=1}^{N} f(xi; θ), the N-dimensional p.d.f. of IIDs
ln L = Σ_{i=1}^{N} ln f(xi; θ)
∂ln L/∂θ = 0
Maximum likelihood estimators
f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}
X: N independent observations, {x1, x2, …, xN}
L(X | μ, σ) = Π_i (1/(σ√(2π))) e^{−(xi−μ)²/(2σ²)}
ln L = −(N/2) ln(2π) − N ln σ − (1/2) Σ_i ((xi − μ)/σ)²
∂ln L/∂μ = (1/σ²) Σ_i (xi − μ) = 0
μ̂ = (1/N) Σ_i xi,   σ̂² = (1/N) Σ_i (xi − μ̂)²
X = {10, 20, 30, 40, 50}
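Applying the MLE formulas above to the example data X = {10, 20, 30, 40, 50} (a minimal sketch):

```python
X = [10, 20, 30, 40, 50]
N = len(X)

mu_hat = sum(X) / N                                  # (1/N) Σ x_i
var_hat = sum((x - mu_hat) ** 2 for x in X) / N      # (1/N) Σ (x_i - μ̂)^2
```

This gives μ̂ = 30 and σ̂² = 200; note the 1/N (not 1/(N−1)) factor, which is exactly the bias discussed next.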
Mean of estimators: consistency and bias
Consistency: as N → ∞, |t − θ| → 0
If t is consistent, then (N − a)/(N − b) · t is also consistent.
Unbiased: E(t) = θ
Estimators:
t_μ = (1/N) Σ_i xi,   t_{σ²} = (1/N) Σ_i (xi − μ̂)²,   t'_{σ²} = (1/(N−1)) Σ_i (xi − μ̂)²
E(t_μ) = E[(1/N) Σ_i xi] = μ, consistent and unbiased
E(t_{σ²}) = ((N − 1)/N) σ², consistent but biased
E(t'_{σ²}) = σ², consistent and unbiased
Variance of estimators: Minimum
variance bound
Var(t) ≥ 1 / E[(∂ln L/∂θ)²]
Equality holds if ∂ln L/∂θ is a linear function of t − θ.
For example, t_μ = (1/N) Σ_i xi is an MVB estimator.
t_{σ²} = (1/N) Σ_i (xi − μ)² is an MVB estimator for σ², but not an MVB estimator for σ.
In general, maximum likelihood estimators are biased and not MVB estimators; thus errors should be obtained by bootstrap estimation. As N → ∞, the estimators become consistent, unbiased, minimum-variance estimators.
Relation between MLE and least-squares fitting
A function y = f(x, θ) is fit to data {yi = f(xi, θ) + Δyi} with Gaussian errors σi.
ln L = −(N/2) ln(2π) − Σ_i ln σi − (1/2) Σ_i ((yi − f(xi, θ))/σi)²
Maximizing ln L is equivalent to minimizing χ² = Σ_i ((yi − f(xi, θ))/σi)².
The same as the least-squares procedure.
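A sketch of this equivalence: for Gaussian errors with known σi, the negative log-likelihood is χ²/2 plus a constant, so both criteria pick the same parameter. The model y = θx and the data below are hypothetical:

```python
import math

# Hypothetical data with equal error bars sigma_i, model y = theta * x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
sigma = 0.5

def chi2(theta):
    """Least-squares criterion: chi^2 = Σ ((y_i - theta*x_i)/sigma)^2."""
    return sum(((y - theta * x) / sigma) ** 2 for x, y in zip(xs, ys))

def neg_log_L(theta):
    """Gaussian negative log-likelihood = constant + chi^2 / 2."""
    n = len(xs)
    return 0.5 * n * math.log(2 * math.pi) + n * math.log(sigma) + 0.5 * chi2(theta)

# Grid search: both criteria select the same theta
thetas = [1.0 + 0.001 * i for i in range(2001)]   # 1.000 .. 3.000
theta_ls = min(thetas, key=chi2)
theta_ml = min(thetas, key=neg_log_L)
```

For this data the closed-form least-squares answer is θ = Σxy/Σx² = 59.7/30 = 1.99, and the ML scan lands on the same grid point.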
Test of hypotheses
• Likelihood ratio
• Kolmogorov–Smirnov test
My research work: mass distribution of M87
Summary of MLEs
Pros
• No data binning, all observed information used.
• They become unbiased minimum variance estimators
as the sample size increases.
• They can generate confidence bounds.
• Likelihood functions can be used to test hypotheses
about models and parameters.
Cons
• MLEs can be heavily biased.
• Calculating MLEs is often computationally
expensive.
Conclusions
Lecture 13
Outline
• Transmitting one bit at a time
• Matched filtering
• PAM system
• Intersymbol interference
• Communication performance
Bit error probability for binary signals
Symbol error probability for M-ary (multilevel) signals
• Eye diagram
Transmitting One Bit
• Transmission on communication channels is analog
• One way to transmit digital information is called 2-level digital pulse amplitude modulation (PAM)
[Figure: a '0' bit is sent as a pulse x0(t) of amplitude −A over the bit period Tb, and a '1' bit as a pulse x1(t) of amplitude +A; x(t) passes through an additive noise channel to give y(t). How does the receiver decide which bit was sent?]
Transmitting One Bit
• Two-level digital pulse amplitude modulation over a channel that has memory but does not add noise
[Figure: the channel is modeled as an LTI system with impulse response h(t) of duration Th. The '0' bit pulse of amplitude −A is smeared into an output y0(t) of duration Th + Tb, and similarly for the '1' bit. Assume that Th < Tb.]
Transmitting Two Bits (Interference)
• Transmitting two bits (pulses) back-to-back will cause overlap (interference) at the receiver
[Figure: x(t) carrying a '1' bit then a '0' bit, convolved with h(t), gives y(t) in which the two received pulses overlap for Th seconds. Assume that Th < Tb.]
Disadvantages?
• Option #2: use channel equalizer in receiver
FIR filter designed via training sequences sent by transmitter
Design goal: cascade of channel memory and channel
equalizer should give all-pass frequency response
Digital 2-level PAM System
[Figure: bits bi are mapped by the PAM pulse shaper g(t) (clock Tb) to amplitudes ak ∈ {−A, A}, giving the transmitted signal s(t); the channel h(t) adds AWGN w(t); the receiver applies the matched filter c(t), samples y(t) at t = iTb, and a decision maker with threshold τ_opt outputs the bits]
τ_opt = [N0/(4ATb)] ln(p0/p1)
• Transmitted signal s(t) = Σk ak g(t − kTb)
• Requires synchronization of clocks between transmitter and receiver
Matched Filter
• Detection of pulse in presence of additive noise
Receiver knows what pulse shape it is looking for
Channel memory ignored (assumed compensated by other means, e.g. channel equalizer in receiver)
[Figure: pulse signal g(t) plus noise w(t) enters the matched filter h(t); the output y(t) is sampled at t = T, where T is the symbol period]
Power Spectra
• Deterministic signal x(t) with Fourier transform X(f)
Power spectrum is the square of the absolute value of the magnitude response (phase is ignored): Px(f) = |X(f)|² = X(f) X*(f)
• Autocorrelation of x(t): Rx(τ) = x(τ) * x*(−τ)
Maximum value at Rx(0); Rx(τ) is even symmetric, i.e. Rx(τ) = Rx(−τ)
Multiplication in the Fourier domain is convolution in the time domain; conjugation in the Fourier domain is reversal and conjugation in time:
X(f) X*(f) = F{x(τ) * x*(−τ)}
[Figure: rectangular pulse x(t) of duration Ts and its triangular autocorrelation Rx(τ), nonzero for −Ts ≤ τ ≤ Ts]
Power Spectra
• Power spectrum for signal x(t) is Px(f) = F{Rx(τ)}
Autocorrelation of random signal n(t): Rn(τ) = E[n(t) n*(t + τ)]
For zero-mean Gaussian n(t) with variance σ²: Rn(τ) = σ² δ(τ), so Pn(f) = σ²
• Estimate the noise power spectrum (noise floor) in Matlab:
N = 16384;                    % number of samples
gaussianNoise = randn(N,1);
plot( abs(fft(gaussianNoise)) .^ 2 );
Matched Filter Derivation
[Figure: pulse signal g(t) plus AWGN w(t) with noise power spectrum SW(f) = N0/2 enters the matched filter h(t); the output y(t) is sampled at t = T]
• Noise: n(t) = w(t) * h(t), so SN(f) = SW(f) |H(f)|² = (N0/2) |H(f)|²
E{n²(t)} = ∫ SN(f) df = (N0/2) ∫ |H(f)|² df
• Signal: g0(t) = g(t) * h(t), so G0(f) = H(f) G(f)
g0(t) = ∫ H(f) G(f) e^{j2πft} df
|g0(T)|² = |∫ H(f) G(f) e^{j2πfT} df|²
Matched Filter Derivation
• Find h(t) that maximizes the pulse peak SNR:
η = |∫ H(f) G(f) e^{j2πfT} df|² / [(N0/2) ∫ |H(f)|² df]
• Schwartz's inequality
For vectors: aᵀb = ||a|| ||b|| cos θ, so |aᵀb| ≤ ||a|| ||b||
For functions: |∫ φ1(x) φ2*(x) dx|² ≤ ∫ |φ1(x)|² dx · ∫ |φ2(x)|² dx
η_max = [∫ |H(f)|² df · ∫ |G(f)|² df] / [(N0/2) ∫ |H(f)|² df] = (2/N0) ∫ |G(f)|² df
which occurs when H_opt(f) = k G*(f) e^{−j2πfT} (k a constant), by Schwartz's inequality.
Hence h_opt(t) = k g*(T − t).
Matched Filter
• Given transmitter pulse shape g(t) of duration T,
matched filter is given by hopt(t) = k g*(T−t) for any constant k
Duration and shape of impulse response of the optimal filter is
determined by pulse shape g(t)
hopt(t) is scaled, time-reversed, and shifted version of g(t)
• Optimal filter maximizes peak pulse SNR
η_max = (2/N0) ∫ |G(f)|² df = (2/N0) ∫ |g(t)|² dt = 2Eb/N0
Does not depend on pulse shape g(t)
Proportional to signal energy (energy per bit) Eb
Inversely proportional to power spectral density of noise
Matched Filter for Rectangular Pulse
• Matched filter for a causal rectangular pulse has an impulse response that is a causal rectangular pulse
• Convolving the input with a rectangular pulse of duration T sec and sampling the result at T sec is the same as:
First, integrate for T sec
Second, sample at symbol period T sec ("sample and dump")
Third, reset integration for next time period
• Integrate and dump circuit
[Figure: integrator with reset, output sampled at t = kT; h(t) = a rectangular pulse of duration T]
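A numerical check that the matched filter for a rectangular pulse, sampled at t = T, reproduces integrate-and-dump; the received waveform used below is arbitrary:

```python
import numpy as np

dt = 1e-4
T = 1e-2                                # symbol period
t = np.arange(0.0, T, dt)
x = np.sin(2 * np.pi * 300 * t) + 0.5   # arbitrary received waveform on one symbol

# Matched filter for a rectangular pulse: h(t) = rect on [0, T]
h = np.ones_like(t)
y = np.convolve(x, h) * dt
y_at_T = y[len(t) - 1]                  # matched filter output sampled at t = T

# Integrate-and-dump: integral of x over the symbol period
integral = np.sum(x) * dt
```

The two numbers agree exactly: convolving with a rect of duration T and sampling at T is just summing the symbol's samples.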
Digital 2-level PAM System
• Why is g(t) a pulse and not an impulse?
Otherwise s(t) = Σk ak δ(t − kTb) would require infinite bandwidth
Since we cannot send a signal of infinite bandwidth, we limit its bandwidth by using a pulse shaping filter
• Neglecting noise, we would like y(t) = g(t) * h(t) * c(t) to be a pulse, i.e. y(t) = p(t), to eliminate ISI
y(t) = Σk ak p(t − kTb) + n(t), where n(t) = w(t) * c(t) and p(t) is centered at the origin
y(ti) = ai p(ti − iTb) + Σ_{k≠i} ak p((i − k)Tb) + n(ti)   (note that ti = iTb)
      = actual value + intersymbol interference (ISI) + noise
Eliminating ISI in PAM
• One choice for P(f) is a rectangular pulse:
P(f) = 1/(2W) for −W ≤ f ≤ W, and 0 for |f| > W, i.e. P(f) = (1/(2W)) rect(f/(2W))
where W is the bandwidth of the system
The inverse Fourier transform of a rectangular pulse is a sinc function:
p(t) = sinc(2Wt)
• This is called the Ideal Nyquist Channel
• It is not realizable because the pulse shape is not causal and is infinite in duration
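A quick check of the zero-ISI property of the Nyquist pulse p(t) = sinc(2Wt): it equals 1 at t = 0 and vanishes at every other sampling instant t = k/(2W). The bandwidth value below is arbitrary:

```python
import numpy as np

W = 500.0                  # hypothetical bandwidth in Hz; symbol rate is 2W
Tsym = 1 / (2 * W)         # symbol period

def p(t):
    """Ideal Nyquist pulse; np.sinc(x) = sin(pi x)/(pi x)."""
    return np.sinc(2 * W * t)

# Evaluate the pulse at the sampling instants k * Tsym, k = -5..5
samples = [p(k * Tsym) for k in range(-5, 6)]
```

Only the k = 0 sample is nonzero, so adjacent symbols contribute nothing at the sampling instants, which is exactly the zero-ISI condition.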
Eliminating ISI in PAM
• Another choice for P(f) is a raised cosine spectrum:
P(f) = 1/(2W) for 0 ≤ |f| < f1
P(f) = (1/(4W)) {1 − sin[π(|f| − W)/(2W − 2f1)]} for f1 ≤ |f| < 2W − f1
P(f) = 0 for |f| ≥ 2W − f1
• Roll-off factor α = 1 − f1/W gives the bandwidth in excess of the bandwidth W of the ideal Nyquist channel
• Raised cosine pulse p(t) = sinc(2Wt) · cos(2παWt) / (1 − 16α²W²t²) has zero ISI when sampled at the symbol instants
[Figure: received signal r(t) = s(t) + v(t) passes through the matched filter and is sampled at t = nTsym]
PAM Symbol Error Probability
• Noise power and SNR:
PNoise = ∫_{−1/(2Tsym)}^{1/(2Tsym)} (N0/2) df = N0/(2Tsym)
(N0/2 is the two-sided power spectral density of the AWGN)
PSignal = ((M² − 1)/3) d², so SNR = PSignal/PNoise = (2(M² − 1)/3) (d² Tsym/N0)
• Assume an ideal channel, i.e. one without ISI:
x(nTsym) = an + vR(nTsym), where vR is the channel noise filtered by the receiver and sampled
• Consider the M − 2 inner levels in the constellation: an error occurs if and only if |vR(nTsym)| > d, where σ² = N0/2. The probability of error is P(|vR(nTsym)| > d) = 2 Q(d/σ)
• Consider the two outer levels in the constellation: P(vR(nTsym) > d) = Q(d/σ)
PAM Symbol Error Probability
• Assuming that each symbol is equally likely, the symbol error probability for M-level PAM is
Pe = [(M − 2)/M] · 2Q(d/σ) + (2/M) · Q(d/σ) = [2(M − 1)/M] Q(d/σ)
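The Pe formula can be evaluated with Q(x) = (1/2) erfc(x/√2); the d and σ values below are arbitrary:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_symbol_error(M, d, sigma):
    """Pe = 2(M-1)/M * Q(d/sigma) for M-level PAM (equally likely symbols)."""
    return 2 * (M - 1) / M * Q(d / sigma)

pe_binary = pam_symbol_error(2, d=1.0, sigma=0.5)   # reduces to Q(d/sigma)
pe_4ary = pam_symbol_error(4, d=1.0, sigma=0.5)
```

For M = 2 the formula collapses to Q(d/σ), and for fixed d/σ the 4-level error rate is exactly 1.5 times the binary one, reflecting the extra inner levels with two error-prone neighbors.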
Visualizing ISI
• Eye diagram is an empirical measure of signal quality
x(n) = Σk ak g(nTsym − kTsym) = g(0) [ an + Σ_{k≠n} ak g(nTsym − kTsym)/g(0) ]
• Intersymbol interference (ISI): the peak distortion is
D = (M − 1) Σ_{k≠n} |g(nTsym − kTsym)| / g(0)
[Figure: eye diagram for M = 2, showing the margin over noise, the distortion over the zero crossing, the slope indicating sensitivity to timing error, and the interval over which the signal can be sampled, from t − Tsym to t + Tsym]
Optimum Receiver Design
Noise Model
Equivalent Vector Channel Model
Theorem of Irrelevance
MAP Optimum Decision Rule
ML Decision Rule
Evaluation of Probabilities
Optimum Receiver Structure
Key Observations
Optimum Correlation Receiver
Simplified Receiver Cases
Matched Filter Receiver
Example: Receiver Design
Key Facts
The Ubiquitous MATCHED FILTER
. . . it's everywhere!!
How do receivers work?
A brief review, then, of network theory
“Characterizing” a receiver (or generally, a network) by its “impulse response function” . . why?
Do you see how we can represent any signal s(t) as a collection of impulses?
Also, our signal (certainly in radio/radar work) is surely bipolar-cyclic . . a modulated carrier.
About designing our receiver . .
For maximum sensitivity to our own signal's being at the input, we don't need to see a copy of it at the output . . we simply need, at the output, the greatest possible indication of its presence at the input.
Yes, noise . . always present in radio work . .
Admitting, then, that our total input is, say, “gi(t)”, made up of our desired signal AND accompanying (but wholly independent) noise, we see that we can treat these two inputs separately as they pass through our receiver, and our output is just the sum of the two convolutions: one due to the signal being present and one from the always-present noise.
Answer . .
. . various methods (differential calculus; the “Schwartz inequality”) lead to the conclusion that maximum sensitivity is achieved when the IRF is the complex conjugate of the subject signal.
Concepts:
► Signal to Noise Ratio (SNR) as a measure of sensitivity
► Representing our cyclic signal in this “A e^{jθ}” form
(Note that some details of the necessary time-displacement notation are overlooked here.)
This, then, is the matched filter.
Representing our signal by a rotating vector . . a great convenience
Some discussion . . to improve our understanding
In conjugating the phase modulation of our signal, why multiply in the convolution by the amplitude as well?
Where do we use the Matched Filter?
Some illustrations . .
Pulse compression, very common in radar . .
Illustration # 1 . . Pulse Compression
Binary phase coding with a “tapped delay line”
[Figure: a tapped delay line with a 180° phase shift on one tap, giving the binary code sequence “+ - + +”, one of the Barker codes; the first bit out is shown]
Clearly, pulse compression is a “convolution” process, and we see the “time” or “range” sidelobes in the output which, for all the Barker binary codes, are never more than unity value, while the narrow main peak is full value, the number of bits in the code. In this matched situation, the output is the “autocorrelation function”, and a low sidelobe level is a very desirable attribute of a candidate code.
A peculiar thing about binary phase coding
The idea that the “tapped delay line, backwards”
is indeed a conjugating matched filter is not so clear
in binary phase coding . . adding or subtracting 180°
results in the same zero phase for that bit (all bits
then being phase aligned).
The Barker codes:                              sidelobe level
Length  2    + -  and  + +                     -6.0 dB
        3    + + -                             -9.5 dB
        4    + + - +  and  + + + -             -12.0 dB
        5    + + + - +                         -14.0 dB
        7    + + + - - + -                     -16.9 dB
       11    + + + - - - + - - + -             -20.8 dB
       13    + + + + + - - + + - + - +         -22.3 dB
[Figure: modulo-2 adder]
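The Barker-code sidelobe property is easy to verify: the autocorrelation of the length-13 code peaks at 13 with all sidelobes of magnitude at most 1, which gives the 22.3 dB figure as 20·log10(13):

```python
import numpy as np

# Length-13 Barker code: + + + + + - - + + - + - +
barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])

# Matched-filter (autocorrelation) output of the code against itself
acf = np.correlate(barker13, barker13, mode="full")

peak = acf.max()                        # main peak = code length (13)
sidelobes = np.delete(acf, len(acf) // 2)
max_sidelobe = np.abs(sidelobes).max()  # never more than unity for Barker codes
```

The peak-to-sidelobe ratio 13:1 corresponds to 20·log10(13) ≈ 22.3 dB, matching the table above.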
Illustration # 2 . . Antennas; the “Adaptive” Array
First, consider a few discrete elements of a phased array, a line array . .
. . continuing . .
The “Adaptive Antenna” . .
Discussion
Illustration # 3 . . Space-Time Adaptive Processing, STAP
Doppler filtering
Theory: view a single Doppler “filter” as a classic “Matched Filter”; that is, we multiply (convolve) the input signal with the conjugate of the signal being sought.
[Figure: samples #1 through 4 of the signal, the reference, and their product]
Recall, phase angles add when complex numbers (vectors) are multiplied; that is, the signal is “rotated back” in phase by the amount it might have been progressing in phase. To the extent that such a component was in the input signal will we get an output in this particular filter. We've built a Matched Filter for that component (that frequency component) alone. HOWEVER, this is best ONLY IF the background noise is utterly random in Doppler frequencies . .
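A single Doppler filter of the kind described is just a DFT bin: multiply the pulse returns by the conjugate reference and sum. A minimal sketch with a simulated echo in one Doppler bin; the bin index and number of pulses are arbitrary choices:

```python
import numpy as np

N = 16                     # pulses in the coherent processing interval
n = np.arange(N)
f_sig = 3                  # Doppler bin index of the simulated return

# Echo whose phase progresses pulse-to-pulse at Doppler bin 3
signal = np.exp(1j * 2 * np.pi * f_sig * n / N)

def doppler_filter(x, k):
    """One Doppler 'filter': rotate the signal back by the reference
    phase progression for bin k, then sum (a matched filter / DFT bin)."""
    reference = np.exp(1j * 2 * np.pi * k * n / N)
    return np.abs(np.sum(x * np.conj(reference)))

outputs = [doppler_filter(signal, k) for k in range(N)]
best_bin = int(np.argmax(outputs))
```

Only the filter matched to the echo's phase progression integrates coherently; every other bin sums rotating vectors to (essentially) zero.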
The airborne radar situation . . for discussion
[Figure: an airborne radar. Broadband interference (jamming) suggests the need for an adaptive antenna; terrain features contribute to the non-uniform spectrum of the side-lobe coupled ground clutter.]
STAP: to be adaptive in both the antenna's pattern (as before discussed) and also in the weights to put on each pulse return to shape the Doppler filters in spectrum (compensating for the non-uniform spectrum of the background clutter).
Illustration # 4 . . The Polarimetric Matched Filter
Radar Polarimetry . . a little review
► Polarization of an electromagnetic wave is taken as the spatial orientation of the E-field . . most, but certainly not all, radars are designed to operate, for various reasons, in either horizontal or vertical (linear) polarization, fixed by the antenna design; that is, they are not “polarimetric”.
► The work under Dr. Les Novak (MIT/Lincoln Laboratories) in the 1990s is extremely valuable in establishing these approaches to image enhancement by polarimetry. A number of papers in our conferences (to be cited here) and other teaching material he has provided me contribute to this instruction. An airborne SAR at 33 GHz, fully polarimetric, was used in many valuable experiments there.
► The idea of “whitening” and “matching” is universal and forms the basis of matched filter theory.
● The matched filter attempts to maximize the ratio of target intensity to clutter intensity in the combined image, by using weights based on knowledge (estimates) of the polarimetric covariance of target AND clutter returns.
(Polarimetry and image enhancement, cont.)
The End
More than you wanted to know about . . the MATCHED FILTER