
Digital Signal Processing 2

Lesson 3: Optimal filtering

Prof. dr. ir. Toon van Waterschoot

Faculteit Industriële Ingenieurswetenschappen


ESAT – Departement Elektrotechniek
KU Leuven, Belgium
Digital Signal Processing 2: Course Contents
• Lesson 1: Finite word length
• Lesson 2: Linear prediction
• Lesson 3: Optimal filtering
• Lesson 4: Adaptive filtering
• Lesson 5: Detection problems
• Lesson 6: Spectral signal analysis
• Lesson 7: Estimation problems 1
• Lesson 8: Estimation problems 2
• Lesson 9: Sigma-delta modulation
• Lesson 10: Transform coding
Lesson 3: Optimal filtering
• Introduction
• Least-squares and Wiener filter estimation
  stochastic & deterministic estimation, computational aspects, geometrical interpretation, performance analysis, frequency domain formulation, …
• Wiener filtering applications
  noise reduction, time alignment of multi-channel/-sensor signals, channel equalization, …
• Wiener filter implementation
  filter order, filter-bank implementation, …
Lesson 3: Optimal filtering
• Introduction
• Least-squares and Wiener filter estimation
S. V. Vaseghi, Multimedia Signal Processing
- Ch. 8, “Least Square Error, Wiener-Kolmogorov Filters”
• Section 8.1, “LSE Estimation: Wiener-Kolmogorov Filter”
• Section 8.2, “Block-Data Formulation of the WF”
• Section 8.3, “Interpretation of WF as Projection in Vector Space”
• Section 8.4, “Analysis of the Least Mean Square Error Signal”
• Section 8.5, “Formulation of WFs in the Frequency Domain”
• Wiener filtering applications
• Section 8.6, “Some Applications of Wiener Filters”
• Wiener filter implementation
• Section 8.7, “Implementation of Wiener Filters”
Lesson 3: Optimal filtering
• Introduction
• Least-squares and Wiener filter estimation
  stochastic & deterministic estimation, computational aspects, geometrical interpretation, performance analysis, frequency domain formulation, …
• Wiener filtering applications
  noise reduction, time alignment of multi-channel/-sensor signals, channel equalization, …
• Wiener filter implementation
  filter order, filter-bank implementation, …
Introduction
• Optimal filters
- data-dependent filters
- designed so as to minimize the "difference" between the filter output signal and a desired or target signal
- many applications: linear prediction, echo cancellation, signal restoration, channel equalization, radar, system identification
• Wiener filters
- filters for signal prediction or signal/parameter estimation
- optimal for removing the effect of linear distortion (filtering) and/or additive noise from observed data
- many flavors: FIR/IIR, single-/multi-channel, time-/frequency-domain, fixed/block-adaptive/adaptive
Lesson 3: Optimal filtering
• Introduction
• Least-squares and Wiener filter estimation
  stochastic & deterministic estimation, computational aspects, geometrical interpretation, performance analysis, frequency domain formulation, …
• Wiener filtering applications
  noise reduction, time alignment of multi-channel/-sensor signals, channel equalization, …
• Wiener filter implementation
  filter order, filter-bank implementation, …
Least-squares and Wiener filter estimation
• Stochastic Wiener filter estimation
• Deterministic least squares estimation
• Computational aspects
• Geometrical interpretation
• Performance analysis
• Frequency domain formulation
Stochastic Wiener filter estimation (1)
• FIR Wiener filter:
- signal flow graph: the input y(m) is fed through a delay line of z^{-1} elements, and the delayed samples y(m), y(m−1), …, y(m−P+1) are weighted by the coefficients w0, w1, …, wP−1 and summed to form the output x̂(m), the least mean square error estimate of the desired or target signal x(m)
- input-output relation:
  x̂(m) = Σ_{k=0}^{P−1} wk y(m−k) = w^T y
- Wiener filter coefficients: w = Ryy^{-1} ryx
[Figure 8.1: Illustration of a Wiener filter. The output signal x̂(m) is an estimate of the desired signal x(m).]
Stochastic Wiener filter estimation (2)
• FIR Wiener filter:
- input signal = noisy or distorted observed data
  y = [y(0) y(1) … y(N−1)]^T
- desired signal = (unknown) clean data
  x = [x(0) x(1) … x(N−1)]^T
- Wiener filter coefficients
  w = [w0 w1 … wP−1]^T
- input-output relation
  x̂(m) = Σ_{k=0}^{P−1} wk y(m−k) = w^T y
Stochastic Wiener filter estimation (3)
• Wiener filter error signal:
- error signal = desired signal − Wiener filter output signal
  e(m) = x(m) − x̂(m) = x(m) − w^T y
- stacking the error signal samples for m = 0, …, N−1 yields
  e = x − Yw
  where e = [e(0) … e(N−1)]^T, x = [x(0) … x(N−1)]^T, and Y is the N×P data matrix whose m-th row is [y(m) y(m−1) … y(m−P+1)]
- the initial conditions y(1−P), …, y(−1) are known or assumed zero (cf. Lesson 2: autocorrelation vs. covariance method)
Stochastic Wiener filter estimation (4)
• Number of solutions
- the Wiener filter is the optimal filter in the sense of minimizing the mean squared error signal
  e ≈ 0  ⇒  x ≈ Yw
- 3 different cases, depending on the number of observations N and the Wiener filter length P (cf. Lesson 2: linear systems of equations)
  N = P: square system, unique solution with e = 0
  N < P: underdetermined system, ∞ many solutions with e = 0
  N > P: overdetermined system, no solution with e = 0, unique solution with "minimal" e ≠ 0
Stochastic Wiener filter estimation (5)
• Wiener filter estimation:
- mean squared error (MSE) criterion
  E{e²(m)} = E{(x(m) − w^T y)²}
           = E{x²(m)} − 2 w^T E{y x(m)} + w^T E{y y^T} w
           = rxx(0) − 2 w^T ryx + w^T Ryy w
- autocorrelation matrix & cross-correlation vector definitions:
  Ryy = E{y y^T} = the P×P symmetric Toeplitz matrix with entries ryy(i−j), i.e. first row [ryy(0) ryy(1) … ryy(P−1)]
  ryx = [ryx(0) ryx(1) … ryx(P−1)]^T = E{y x(m)}
Stochastic Wiener filter estimation (6)
• Wiener filter estimation:
- mean squared error (MSE) criterion
  E{e²(m)} = rxx(0) − 2 w^T ryx + w^T Ryy w
  = quadratic function of the Wiener filter coefficient vector w
- in expanded form:
  E{e²(m)} = rxx(0) − 2 Σ_{k=0}^{P−1} wk ryx(k) + Σ_{k=0}^{P−1} Σ_{j=0}^{P−1} wk wj ryy(k−j)
  where ryy(k) and ryx(k) are the elements of the autocorrelation matrix Ryy and the cross-correlation vector ryx
- a quadratic function with a full-rank Hessian matrix Ryy is always convex and has a unique minimum
- example: P = 2, the MSE is a bowl-shaped surface over (w0, w1) with a single minimum at w_optimal
[Figure 8.2: Mean square error surface for a two-tap FIR filter.]
Stochastic Wiener filter estimation (7)
• Wiener filter estimation:
- the minimum MSE solution is obtained at the point with zero gradient
- gradient of the MSE criterion w.r.t. the Wiener filter coefficient vector:
  ∂/∂w E{e²(m)} = ∂/∂w [rxx(0) − 2 w^T ryx + w^T Ryy w]
                = −2 ryx + 2 Ryy w
- example: P = 2, the gradient is zero at the bottom of the bowl-shaped MSE surface (w_optimal in Figure 8.2)
Stochastic Wiener filter estimation (8)
• Wiener filter estimation:
- the minimum MSE solution is obtained at the point with zero gradient
  ∂/∂w E{e²(m)} = 0  ⇒  ryx = Ryy w
- minimum MSE Wiener filter estimate:
  w = Ryy^{-1} ryx
  i.e. [w0; w1; …; wP−1] is obtained by solving the Toeplitz system with matrix rows [ryy(0) ryy(1) … ryy(P−1)], [ryy(1) ryy(0) … ryy(P−2)], …, [ryy(P−1) ryy(P−2) … ryy(0)] and right-hand side [ryx(0); ryx(1); …; ryx(P−1)]
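As an illustration (not part of the original slides), a minimal NumPy sketch of this batch Wiener filter estimate, assuming time averages replace the ensemble averages (correlation-ergodic signals); function and variable names are hypothetical:

import numpy as np

def wiener_fir(y, x, P):
    # Length-P FIR Wiener filter w = Ryy^{-1} ryx, estimated from one
    # realization of the observed signal y and the desired signal x by
    # replacing ensemble averages with time averages.
    N = len(y)
    ryy = np.array([np.dot(y[:N - k], y[k:]) / N for k in range(P)])         # ryy(k)
    ryx = np.array([np.dot(y[:N - k], x[k:]) / N for k in range(P)])         # ryx(k) = (1/N) sum_m y(m) x(m+k)
    Ryy = np.array([[ryy[abs(i - j)] for j in range(P)] for i in range(P)])  # symmetric Toeplitz matrix
    return np.linalg.solve(Ryy, ryx)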
Stochastic Wiener filter estimation (9)
• FIR Wiener filter:
- signal flow graph (revisited): the delay-line structure of Figure 8.1, with the tap weights now set to the minimum MSE estimate
  x̂(m) = Σ_{k=0}^{P−1} wk y(m−k) = w^T y,   with   w = Ryy^{-1} ryx
[Figure 8.1: Illustration of a Wiener filter. The output signal x̂(m) is an estimate of the desired signal x(m).]
Least-squares and Wiener filter estimation
• Stochastic Wiener filter estimation
• Deterministic least squares estimation
• Computational aspects
• Geometrical interpretation
• Performance analysis
• Frequency domain formulation
Deterministic least squares estimation (1)
• Wiener filter input/output relation
- set of N linear equations
  x̂(m) = Σ_{k=0}^{P−1} wk y(m−k),   m = 0, …, N−1
  x̂ = Yw
  with Y the N×P data matrix whose m-th row is [y(m) y(m−1) … y(m−P+1)]
- Wiener filter error signal vector
  e = x − x̂ = x − Yw
Deterministic least squares estimation (2)
• Least squares estimation
- sum of squared errors criterion
  Σ_{m=0}^{N−1} e²(m) = e^T e
  = (x − Yw)^T (x − Yw)
  = x^T x − x^T Yw − w^T Y^T x + w^T Y^T Yw
- difference with the MSE criterion: the expectation is replaced by time averaging
  • mean squared error E{e²(m)} = stochastic criterion
  • sum of squared errors Σ_{m=0}^{N−1} e²(m) = deterministic criterion
Deterministic least squares estimation (3)
• Least squares estimation
- the minimum sum of squared errors is obtained at the point with zero gradient
  ∂(e^T e)/∂w = −2 Y^T x + 2 Y^T Yw = 0  ⇒  (Y^T Y) w = Y^T x
- least squares filter estimate:
  w = (Y^T Y)^{-1} Y^T x
- if the desired/observed signals are correlation-ergodic processes, the least squares estimate converges to the Wiener filter estimate
  lim_{N→∞} w = (Y^T Y)^{-1} Y^T x = Ryy^{-1} ryx
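A corresponding sketch of the deterministic least-squares estimate (again illustrative, not from the slides), assuming zero initial conditions y(−1), …, y(1−P) when building the data matrix Y:

import numpy as np

def ls_fir(y, x, P):
    # Least-squares filter estimate w = (Y^T Y)^{-1} Y^T x, with the data
    # matrix Y built row-wise as [y(m) y(m-1) ... y(m-P+1)].
    N = len(y)
    y_pad = np.concatenate([np.zeros(P - 1), y])            # zero initial conditions
    Y = np.array([y_pad[m:m + P][::-1] for m in range(N)])  # row m = [y(m), ..., y(m-P+1)]
    w, *_ = np.linalg.lstsq(Y, x, rcond=None)               # minimizes ||x - Y w||^2
    return w

For long, correlation-ergodic signals this estimate approaches the output of the wiener_fir sketch above, in line with the limit on this slide.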
Least-squares and Wiener filter estimation
• Stochastic Wiener filter estimation
• Deterministic least squares estimation
• Computational aspects
• Geometrical interpretation
• Performance analysis
• Frequency domain formulation
Computational aspects (1)
• Calculation of correlation functions
- the Wiener filter estimate requires the autocorrelation matrix Ryy and the cross-correlation vector ryx
- correlation functions are obtained by averaging over an ensemble of different realizations of the desired/observed signals
- for correlation-ergodic processes, ensemble averaging can be replaced by time averaging, so only one realization is needed:
  ryy(k) = (1/N) Σ_{m=0}^{N−1} y(m) y(m+k)
  ryx(k) = (1/N) Σ_{m=0}^{N−1} y(m) x(m+k)
- choice of N: compromise between accuracy and stationarity (cf. Lesson 2: LP modeling of speech)
Computational aspects (2)
• Calculation of correlation functions
- calculation of the cross-correlation vector ryx is not straightforward if the desired signal x is unknown
- two possible solutions:
  • use prior knowledge about x to estimate ryx
  • rewrite the cross-correlation function in terms of other correlation functions (see later: Wiener filtering applications)
Computational aspects (3)
• Computation of the least squares filter estimate
- least squares filter estimate: w = (Y^T Y)^{-1} Y^T x
- direct matrix inversion has complexity O(P^3)
- the QR decomposition of the data matrix (Q = orthonormal, R = upper-triangular),
  Y = Q [R; 0]
  with R a P×P upper-triangular block and 0 an (N−P)×P zero block, allows the least squares filter estimate to be computed from a square, triangular system (solved by back-substitution):
  R w = [Q^T x]_{first P elements}
- QR-based computation of the LS estimate has complexity O(P^2) (exploiting the Toeplitz structure of the data matrix)
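A small sketch of the QR-based solution (illustrative; NumPy's general-purpose QR does not exploit the Toeplitz structure, so this shows the triangular-system idea rather than the O(P^2) algorithm):

import numpy as np

def ls_fir_qr(Y, x):
    # Least-squares estimate via Y = Q R (Q orthonormal, R upper-triangular):
    # minimizing ||x - Y w||^2 reduces to the square triangular system R w = Q^T x.
    Q, R = np.linalg.qr(Y)               # "economy" QR: Q is N x P, R is P x P
    return np.linalg.solve(R, Q.T @ x)   # solvable by back-substitution since R is triangular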
Least-squares and Wiener filter estimation
• Stochastic Wiener filter estimation
• Deterministic least squares estimation
• Computational aspects
• Geometrical interpretation
• Performance analysis
• Frequency domain formulation
Geometrical interpretation (1)
• Wiener filter input/output relation
- the input/output relation x̂ = Yw can also be written as
  x̂ = w0 y0 + w1 y1 + … + wP−1 yP−1
  with yk = [y(−k) y(1−k) … y(N−1−k)]^T the k-th column of the data matrix Y
- Wiener filter output signal = linear weighted combination of the input signal vectors (cf. Lesson 2: two interpretations of a matrix-vector product)
Geometrical interpretation (2)
• Vector space interpretation
- the set of P input signal vectors {y0, y1, …, yP−1} forms a P-dimensional subspace of the N-dimensional vector space
- the Wiener filter output signal lies in this subspace, since
  x̂ = w0 y0 + w1 y1 + … + wP−1 yP−1
- N = P: subspace = entire space, including the desired signal ⇒ x̂ = x, e = 0
- N > P: subspace ⊂ entire space; the output signal is the orthogonal projection of the desired signal vector onto the subspace ⇒ x̂ ≠ x, e ≠ 0
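A quick numerical check of this projection interpretation (hypothetical random data): for N > P the least-squares error vector is orthogonal to every input signal vector yk:

import numpy as np

rng = np.random.default_rng(0)
N, P = 8, 3
Y = rng.standard_normal((N, P))            # columns y0, ..., y_{P-1}
x = rng.standard_normal(N)                 # desired signal vector

w, *_ = np.linalg.lstsq(Y, x, rcond=None)
e = x - Y @ w                              # error vector
print(Y.T @ e)                             # ~ 0: x_hat = Y w is the orthogonal projection of x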
Geometrical interpretation (3)
• Vector space interpretation
- example: N = 3, P = 2
  x̂ = w0 y0 + w1 y1, with y0 = [y(m) y(m−1) y(m−2)]^T and y1 = [y(m−1) y(m−2) y(m−3)]^T
- the two 3-dimensional input vectors span a 2-dimensional plane in the 3-dimensional space; the clean signal vector x = [x(m) x(m−1) x(m−2)]^T generally lies outside this plane, and x̂ = [x̂(m) x̂(m−1) x̂(m−2)]^T is its orthogonal projection onto the plane, with error vector e = [e(m) e(m−1) e(m−2)]^T perpendicular to the plane
Least-squares and Wiener filter estimation
• Stochastic Wiener filter estimation
• Deterministic least squares estimation
• Computational aspects
• Geometrical interpretation
• Performance analysis
• Frequency domain formulation
Performance analysis (1)
• Variance of the Wiener filter estimate
- substituting the Wiener filter estimate w = Ryy^{-1} ryx into the MSE criterion gives the error variance
  E{e²(m)} = rxx(0) − w^T ryx = rxx(0) − w^T Ryy w
- the variance of the Wiener filter output signal is
  E{x̂²(m)} = w^T Ryy w
  so the error variance can be written as
  E{e²(m)} = E{x²(m)} − E{x̂²(m)}
  σ²_e = σ²_x − σ²_x̂
Performance analysis (2)
• Variance of the Wiener filter estimate
- in general, the observed data can be decomposed as
  y(m) = xc(m) + n(m)
  • xc(m) = part of the observation correlated with the desired signal x(m)
  • n(m) = random noise signal
- the Wiener filter error signal can be decomposed accordingly
  e(m) = [x(m) − Σ_{k=0}^{P−1} wk xc(m−k)] − Σ_{k=0}^{P−1} wk n(m−k) = ex(m) + en(m)
  with ex(m) the first, signal-related term and en(m) = −Σ_{k} wk n(m−k) the noise-related term
- the error variance is then σ²_e = σ²_ex + σ²_en
Least-squares and Wiener filter estimation
• Stochastic Wiener filter estimation
• Deterministic least squares estimation
• Computational aspects
• Geometrical interpretation
• Performance analysis
• Frequency domain formulation
Frequency domain formulation (1)
• Frequency domain MSE criterion
- frequency domain Wiener filter output and error signal:
  X̂(f) = W(f) Y(f)
  E(f) = X(f) − X̂(f) = X(f) − W(f) Y(f)
- frequency domain MSE criterion:
  E{|E(f)|²} = E{(X(f) − W(f)Y(f)) (X(f) − W(f)Y(f))*}
- Parseval's theorem: the sum of squared errors in the time domain equals the integral of the error power spectrum
  Σ_{m=0}^{N−1} e²(m) = ∫_{−fs/2}^{fs/2} |E(f)|² df
Frequency domain formulation (2)
• Frequency domain Wiener filter estimate
- the minimum MSE solution is obtained at the point with zero gradient
  ∂E{|E(f)|²}/∂W(f) = 2 W(f) PYY(f) − 2 PXY(f) = 0
- power and cross-power spectra:
  PYY(f) = E{Y(f) Y*(f)}
  PXY(f) = E{X(f) Y*(f)}
- frequency domain Wiener filter estimate:
  W(f) = PXY(f) / PYY(f)
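An illustrative sketch of this frequency-domain estimate, where the expectations are approximated by averaging FFTs over segments (an assumed estimator, not prescribed by the slides):

import numpy as np

def wiener_freq(y, x, nfft=256):
    # W(f) = P_XY(f) / P_YY(f), with the (cross-)power spectra estimated by
    # averaging periodograms over non-overlapping length-nfft segments.
    nseg = len(y) // nfft
    Y = np.fft.fft(y[:nseg * nfft].reshape(nseg, nfft), axis=1)
    X = np.fft.fft(x[:nseg * nfft].reshape(nseg, nfft), axis=1)
    Pyy = np.mean(np.abs(Y) ** 2, axis=0)        # E{Y(f) Y*(f)}
    Pxy = np.mean(X * np.conj(Y), axis=0)        # E{X(f) Y*(f)}
    return Pxy / Pyy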
Lesson 3: Optimal filtering
• Introduction
• Least-squares and Wiener filter estimation
  stochastic & deterministic estimation, computational aspects, geometrical interpretation, performance analysis, frequency domain formulation, …
• Wiener filtering applications
  noise reduction, time alignment of multi-channel/-sensor signals, channel equalization, …
• Wiener filter implementation
  filter order, filter-bank implementation, …
Wiener filtering applications
• Application 1: noise reduction
• Application 2: channel equalization
• Application 3: time-alignment of multi-channel/-sensor
signals
Application 1: noise reduction (1)
• Time domain Wiener filter
- data model
  y(m) = x(m) + n(m)
- main assumption: desired signal and noise are uncorrelated
- time domain Wiener filter:
  Ryy = Rxx + Rnn
  ryx = rxx
  w = (Rxx + Rnn)^{-1} rxx
- the noise correlation matrix is estimated during noise-only periods, which requires signal activity detection
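An illustrative time-domain sketch (names hypothetical), assuming a noise-only segment flagged by a signal activity detector is available for estimating the noise correlations:

import numpy as np

def acf(v, P):
    # Biased time-averaged autocorrelation r(0), ..., r(P-1).
    N = len(v)
    return np.array([np.dot(v[:N - k], v[k:]) / N for k in range(P)])

def noise_reduction_wiener(y, noise_only, P):
    # w = (Rxx + Rnn)^{-1} rxx for y(m) = x(m) + n(m) with x and n uncorrelated:
    # Ryy is estimated from the noisy signal, Rnn from a noise-only segment,
    # and rxx = ryy - rnn follows from the uncorrelatedness assumption.
    ryy, rnn = acf(y, P), acf(noise_only, P)
    rxx = ryy - rnn
    Ryy = np.array([[ryy[abs(i - j)] for j in range(P)] for i in range(P)])  # = Rxx + Rnn
    return np.linalg.solve(Ryy, rxx)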
Application 1: noise reduction (2)
• Frequency domain Wiener filter
- data model
  Y(f) = X(f) + N(f)
- main assumption: desired signal and noise are uncorrelated
- frequency domain Wiener filter:
  W(f) = PXX(f) / (PXX(f) + PNN(f))
- interpretation in terms of the signal-to-noise ratio (SNR):
  W(f) = SNR(f) / (SNR(f) + 1)
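A frequency-domain counterpart of the same idea (illustrative sketch under the same assumptions; PXX is estimated as PYY − PNN and clamped at zero):

import numpy as np

def noise_reduction_gain(y, noise_only, nfft=256):
    # W(f) = PXX(f) / (PXX(f) + PNN(f)) = SNR(f) / (SNR(f) + 1).
    def avg_psd(v):
        nseg = len(v) // nfft
        V = np.fft.fft(v[:nseg * nfft].reshape(nseg, nfft), axis=1)
        return np.mean(np.abs(V) ** 2, axis=0)
    Pyy, Pnn = avg_psd(y), avg_psd(noise_only)
    Pxx = np.maximum(Pyy - Pnn, 0.0)              # estimate of the clean signal power spectrum
    return Pxx / (Pxx + Pnn + 1e-12)              # small constant avoids division by zero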
Application 1: noise reduction (3)
• Frequency domain Wiener filter
  W(f) = SNR(f) / (SNR(f) + 1)
- the Wiener filter attenuates each frequency component in proportion to its SNR
[Figure 8.4: Variation of the gain of the Wiener filter frequency response with SNR.]
[Figure 8.5: Illustration of the variation of the Wiener frequency response with the signal spectrum for additive white noise. The Wiener filter response broadly follows the signal spectrum.]
Application 1: noise reduction (4)
• Frequency domain Wiener filter
- noise can only be removed completely when the desired signal and noise spectra are separable
[Figure 8.6: Illustration of separability: (a) the signal and noise spectra do not overlap, and the signal can be recovered with a low-pass filter; (b) the signal and noise spectra overlap, and the noise can be reduced but not completely removed.]
Wiener filtering applications
• Application 1: noise reduction
• Application 2: channel equalization
• Application 3: time-alignment of multi-channel/-sensor
signals
Application 2: channel equalization
• Frequency domain Wiener filter
- data model
  Y(f) = X(f) H(f) + N(f)
- frequency domain Wiener filter = compromise between channel equalization & noise reduction (assuming the signal and the channel noise are uncorrelated):
  W(f) = PXX(f) H*(f) / (PXX(f) |H(f)|² + PNN(f))
- in the absence of channel noise, PNN(f) = 0, and the Wiener filter is simply the inverse of the channel distortion model, W(f) = H^{-1}(f)
- the equalization problem is treated in detail in Chapter 17 of Vaseghi
[Figure 8.7: Illustration of a channel model followed by an equaliser.]
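A one-line sketch of this equalizer gain (illustrative; the spectra and channel response are assumed to be given on a common frequency grid):

import numpy as np

def equalizer_wiener(Pxx, H, Pnn):
    # W(f) = PXX(f) H*(f) / (PXX(f) |H(f)|^2 + PNN(f)); with PNN = 0 this
    # reduces to the inverse channel 1/H(f), and with H = 1 to the
    # noise-reduction gain PXX / (PXX + PNN).
    return Pxx * np.conj(H) / (Pxx * np.abs(H) ** 2 + Pnn)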


Wiener filtering applications
• Application 1: noise reduction
• Application 2: channel equalization
• Application 3: time-alignment of multi-channel/-sensor
signals
Application 3: time-alignment of multi-channel/-sensor signals (1)
• Multi-channel/-sensor signals:
- sensor array = collection of multiple sensors observing the same source signal x at different positions in space
- each sensor signal is a filtered & noisy version of the source signal (linear filter hk, additive noise nk)
[Figure 8.8: Illustration of a multi-channel system where Wiener filters w1(m), …, wK(m) are used to time-align the signals y1(m), …, yK(m) from the different channels.]
Application 3: time-alignment of multi-channel/-sensor signals (2)
• Wiener filter
- data model for a simple example (K = 2, h1 = 1, h2 = A z^{-D}):
  y1(m) = x(m) + n1(m)
  y2(m) = A x(m−D) + n2(m)
- Wiener filter error signal (y1 = input, y2 = desired signal):
  e(m) = y2(m) − Σ_{k=0}^{P−1} wk y1(m−k)
- time domain Wiener filter: w = (Rxx + Rn1n1)^{-1} A rxx(D)
- frequency domain Wiener filter:
  W(f) = A e^{-jωD} PXX(f) / (PXX(f) + PN1N1(f))
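An illustrative end-to-end sketch on synthetic data (all signals and parameter values hypothetical): with y1 as input and y2 as desired signal, the least-squares/Wiener filter concentrates its energy at lag D, from which the inter-channel delay can be read off:

import numpy as np

rng = np.random.default_rng(1)
N, P, D, A = 10_000, 32, 7, 0.8
x = rng.standard_normal(N)
y1 = x + 0.1 * rng.standard_normal(N)                      # channel 1: x(m) + n1(m)
y2 = A * np.roll(x, D) + 0.1 * rng.standard_normal(N)      # channel 2: A x(m-D) + n2(m)

y1_pad = np.concatenate([np.zeros(P - 1), y1])
Y1 = np.array([y1_pad[m:m + P][::-1] for m in range(N)])   # rows [y1(m) ... y1(m-P+1)]
w, *_ = np.linalg.lstsq(Y1, y2, rcond=None)
print(int(np.argmax(np.abs(w))))                           # ~ D = 7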
Lesson 3: Optimal filtering
• Introduction
• Least-squares and Wiener filter estimation
  stochastic & deterministic estimation, computational aspects, geometrical interpretation, performance analysis, frequency domain formulation, …
• Wiener filtering applications
  noise reduction, time alignment of multi-channel/-sensor signals, channel equalization, …
• Wiener filter implementation
  filter order, filter-bank implementation, …
Wiener filter implementation (1)
• Estimation of noise and noisy signal spectra
- use of a signal activity detector
- see Lesson 5: Detection problems
[Figure 8.9: Configuration of a system for estimation of the frequency-domain Wiener filter: a spectral estimator for the noisy signal and a noise spectrum estimator (driven by the signal activity detector) feed an SNR estimator, which produces the Wiener filter coefficient vector W(f) = SNR(f) / (SNR(f) + 1).]


Wiener filter implementation (2)
• Filter-bank implementation (see DSP-1)
- downsampling in subbands leads to a complexity reduction
[Figure 8.10: A filter-bank implementation of a Wiener filter for additive noise reduction: the noisy signal y(m) is split into subbands by band-pass filters BPF(f1), …, BPF(fN); in each subband a Wiener gain W(fk) is applied, and the subband outputs are recombined into x̂(m).]
Wiener filter implementation (3)
• Choice of Wiener filter order
- the Wiener filter order affects:
  1. the ability of the filter to model and remove distortion and to reduce noise
  2. the computational complexity of the filter
  3. the numerical stability of the Wiener filter solution (a large filter order may produce an ill-conditioned, large-dimensional correlation matrix)
- the choice of model order is always a trade-off between these three criteria
