Chapter 6: Wiener Filters and the LMS Algorithm
Marc Moonen
Dept. E.E./ESAT-STADIUS, KU Leuven
[email protected]
www.esat.kuleuven.be/stadius/
Part-III : Optimal & Adaptive Filters
See Part-II
1. ‘Classical’ filter design
lowpass/ bandpass/ notch filters/ ...
2. ‘Optimal’ filter design
   Design filter such that for a given input signal (i.e. ‘statistical info available’),
   the filter output signal is ‘optimally close’ (to be defined) to a given
   ‘desired output signal’.
   [Figure: filter input → filter (filter parameters) → filter output]
3. ‘Adaptive’ filters
   • self-designing
   • adaptation algorithm to monitor environment
   • properties of adaptive filters:
     convergence / tracking
     numerical stability / accuracy / robustness
     computational complexity
     hardware implementation
Introduction: Applications

Example: plant identification / echo cancellation
[Figure: plant input feeds both the plant (echo path) and the adaptive filter;
 desired signal = near-end signal + echo]
Example: interference cancellation
[Figure: reference sensor picks up noise; primary sensor picks up signal + noise;
 output = signal + residual noise]
[Figure: same set-up with an adaptive filter and multiple reference sensors: noise source →
 reference sensors → adaptive filter; signal source → primary sensor (signal + noise);
 output = signal + residual noise]
Example: channel identification
[Figure: base station antenna → radio channel → mobile receiver, with an adaptive filter
 modeling the radio channel]
Example: inverse modeling
[Figure: training sequence 0,1,1,0,1,0,0,... → radio channel → adaptive filter;
 desired signal = training sequence 0,1,1,0,1,0,0,...]
Prototype optimal filter revisited
Have to decide on 2 things..
1. filter structure?  → FIR filters (= pragmatic choice)
2. cost function?     → quadratic cost function (= pragmatic choice)
[Figure: filter input → filter (filter parameters) → filter output;
 error = desired signal − filter output]
FIR filter (see previous chapters):
   y_k = Σ_{l=0}^{L} w_l·u_{k−l} = w^T·u_k
where
   w^T  = [ w_0  w_1  w_2  ⋯  w_L ]
   u_k^T = [ u_k  u_{k−1}  ⋯  u_{k−L} ]
[Figure: filter output y_k, desired signal d_k, error e_k = d_k − y_k]
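A minimal sketch (not from the slides) of this FIR filtering operation in Python/NumPy; the tap values, signal length and the helper fir_output are made up for illustration:

import numpy as np

# Hypothetical example: L+1 = 4 taps applied to a random input signal.
L = 3
w = np.array([0.5, 0.3, -0.2, 0.1])        # w = [w_0, w_1, ..., w_L]
u = np.random.randn(1000)                  # filter input u_k

def fir_output(w, u, k):
    # y_k = w^T u_k with delay line u_k = [u[k], u[k-1], ..., u[k-L]]
    return w @ u[k - len(w) + 1 : k + 1][::-1]

y = np.array([fir_output(w, u, k) for k in range(L, len(u))])
# Equivalently: y = np.convolve(u, w)[L:len(u)]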
• convergence problems
Optimal filtering / Wiener filters
PS: Can generalize the FIR filter to a ‘multi-channel FIR filter’
(example: the interference cancellation set-up with multiple reference sensors above)
Multi-channel FIR filters
[Figure: filter inputs 1, 2, 3, each feeding its own delay line with adaptive taps
 w0[k] ... w11[k]; the per-channel outputs are summed]
PS: Special case of ‘multi-channel FIR filter’ is ‘linear combiner’
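A minimal sketch (assumed set-up, made-up names and shapes) of such a multi-channel FIR filter in Python/NumPy: one delay line per input channel, with the per-channel outputs summed. With a single tap per channel this reduces to the ‘linear combiner’.

import numpy as np

# Hypothetical example: 3 input channels, 4 taps per channel (12 weights, cf. w0[k]..w11[k]).
n_channels, n_taps = 3, 4
W = np.random.randn(n_channels, n_taps)   # one row of taps per channel
U = np.random.randn(n_channels, 1000)     # one row of input samples per channel

def multichannel_fir_output(W, U, k):
    # y_k = sum over channels c of w_c^T u_{c,k}, each channel with its own delay line
    y = 0.0
    for c in range(W.shape[0]):
        u_ck = U[c, k - W.shape[1] + 1 : k + 1][::-1]   # [u_c[k], u_c[k-1], ...]
        y += W[c] @ u_ck
    return y

# 'Linear combiner' = special case with n_taps = 1: y_k = sum_c w_c * u_c[k]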
Optimal filtering / Wiener filters
   y_k = w_k^T·u_k ,   with  u_k = [ u_k  u_{k−1}  ⋯  u_{k−L} ]^T
MMSE cost function:
   J_MSE(w) = E{ e_k² } = E{ (d_k − y_k)² } = E{ (d_k − w^T·u_k)² }
The MMSE cost function can be expanded as…
   J_MSE(w) = E{ |e_k|² } = E{ |d_k − w^T·u_k|² }
            = E{ d_k² } + w^T·E{ u_k·u_k^T }·w − 2·w^T·E{ u_k·d_k }
with
   X̄_uu = E{ u_k·u_k^T } = correlation matrix
   X̄_du = E{ u_k·d_k }   = cross-correlation vector
Everything you need to know about stochastic processes (IV):
The correlation matrix has a special structure…
Optimal filtering / Wiener filters
MMSE cost function can be expanded as… (continued)
   J_MSE(w) = E{ d_k² } + w^T·X̄_uu·w − 2·w^T·X̄_du
Minimizing this quadratic cost (setting its gradient to zero) gives the Wiener–Hopf equations:
   X̄_uu·w_WF = X̄_du   →   w_WF = X̄_uu⁻¹·X̄_du     .....simple enough!
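A minimal sketch (hypothetical identification set-up; all signals, w_true and data lengths are made up) of the Wiener filter computation in Python/NumPy: estimate X̄_uu and X̄_du by sample averaging, then solve the Wiener–Hopf equations:

import numpy as np

rng = np.random.default_rng(0)
L = 3                                          # filter order (L+1 taps)
w_true = np.array([1.0, -0.5, 0.25, 0.1])      # hypothetical system generating d_k
u = rng.standard_normal(10000)                 # filter input u_k
d = np.convolve(u, w_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))   # desired signal d_k

# Rows of U are the delay-line vectors u_k^T = [u_k, u_{k-1}, ..., u_{k-L}], k = L..N-1
U = np.column_stack([u[L - l : len(u) - l] for l in range(L + 1)])
D = d[L:]

X_uu = (U.T @ U) / len(D)            # sample estimate of E{u_k u_k^T}  (correlation matrix)
X_du = (U.T @ D) / len(D)            # sample estimate of E{u_k d_k}    (cross-correlation vector)

w_WF = np.linalg.solve(X_uu, X_du)   # Wiener-Hopf: X_uu · w_WF = X_du
print(w_WF)                          # close to w_true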
How do we solve the Wiener–Hopf equations?
Everything you need to know about matrices and vectors (I):
solving linear systems (L+1 linear equations in L+1 unknowns), e.g.
   [ 1  2 ; 3  4 ]·w_WF = [ 3 ; 7 ]   →   w_WF = [ 1 ; 1 ]
The steepest-descent iteration (gradient descent on J_MSE) can be rewritten as follows.
Replace n+1 by n for convenience…
   w(n) = w(n−1) + μ·( E{u_k·d_k} − E{u_k·u_k^T}·w(n−1) )
Then replace iteration index n by time index k (i.e. perform 1 iteration per sampling
interval), and replace the expectations by instantaneous estimates, which gives the LMS update:
   w_LMS[k] = w_LMS[k−1] + μ·u_k·( d_k − u_k^T·w_LMS[k−1] )
Whenever LMS has reached the WF solution, the expected value of u_k·(d_k − u_k^T·w_LMS[k−1])
(= estimated gradient in the update formula) is zero, but the instantaneous value is generally
non-zero (= noisy), and hence LMS will again move away from the WF solution!
Adaptive Filtering: LMS Algorithm
Convergence of LMS requires a sufficiently small step size μ, based on the eigenvalues λ_i of X̄_uu
(note that Σ_{i=0}^{L} λ_i = trace(X̄_uu) = (L+1)·E{u_k²}). A practical rule of thumb is
   μ < 0.2 / ( L·E{u_k²} )
which means the step size has to be much smaller…!
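A minimal sketch of the LMS algorithm in Python/NumPy (a made-up identification example; the step size follows the rule of thumb above):

import numpy as np

rng = np.random.default_rng(1)
L = 3
w_true = np.array([1.0, -0.5, 0.25, 0.1])      # hypothetical system to be identified
u = rng.standard_normal(10000)                 # filter input
d = np.convolve(u, w_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))   # desired signal

mu = 0.2 / (L * np.mean(u**2))                 # step size per the rule of thumb above
w = np.zeros(L + 1)                            # w_LMS[-1] = 0

for k in range(L, len(u)):
    u_k = u[k - L : k + 1][::-1]               # u_k = [u[k], u[k-1], ..., u[k-L]]
    e_k = d[k] - u_k @ w                       # a-priori error d_k - u_k^T w_LMS[k-1]
    w = w + mu * u_k * e_k                     # LMS update

print(w)                                       # hovers around w_WF ~ w_true (noisy gradient)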
Normalized LMS (NLMS) update:
   w_NLMS[k] = w_NLMS[k−1] + ( μ / ( α + u_k^T·u_k ) )·u_k·( d_k − u_k^T·w_NLMS[k−1] )
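A minimal sketch of the NLMS update in Python/NumPy (again a made-up identification example; the values of μ and α are assumptions):

import numpy as np

rng = np.random.default_rng(2)
L = 3
w_true = np.array([1.0, -0.5, 0.25, 0.1])      # hypothetical system to be identified
u = rng.standard_normal(10000)
d = np.convolve(u, w_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))

mu, alpha = 0.5, 1e-6                          # assumed normalized step size and small regularization
w = np.zeros(L + 1)

for k in range(L, len(u)):
    u_k = u[k - L : k + 1][::-1]               # u_k = [u[k], u[k-1], ..., u[k-L]]
    e_k = d[k] - u_k @ w                       # a-priori error
    w = w + (mu / (alpha + u_k @ u_k)) * u_k * e_k   # NLMS update (step normalized by input energy)

print(w)                                       # close to w_true; behaviour is insensitive to input scaling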