Exp5 EE433
1. Introduction
In this experiment, the concept of optimum filtering is investigated. Optimum filtering uses the
statistical characteristics of the input signal to extract the useful information. Hence, some
statistical information regarding the input and the desired signal must be known in
order to apply this type of filter.
2. Preliminary Work
The most common types of filters encountered in practice are deterministic filters in the form
of a lowpass, highpass, etc. While these filters are easily designed and used, they are far
from performing in the best manner, especially when some information about the signal
statistics is available. Optimum filters perform best with respect to a certain optimality
criterion, and they are designed using the signal statistics. While there are several
optimality criteria, we concentrate on a particular one, namely the mean square error (MSE). In
the following part, the theoretical background for MSE-optimum filters, or Wiener filters, is
presented.
The problem for MSE-optimum filters can be outlined as follows. Given the input sequence,
x[n], the desired response, d[n], and the form of the filter structure, i.e.,

y[n] = \sum_{k=0}^{P-1} h_k x[n-k], \quad n = 0, 1, \ldots   (1)

the target is to find the filter coefficients, h_k, such that the MSE is minimized. Note that y[n] is
the Wiener filter output. While it is possible to have IIR Wiener filters, FIR Wiener filters are
considered in the following parts for simplicity. The definitions for the error and the MSE are
given as follows, i.e.,

e[n] = d[n] - y[n], \quad J = E\{e^2[n]\},   (2)
where E\{\cdot\} is the expectation operator. In order to find the optimum coefficients, the
derivative of J with respect to h_k should be equated to zero. Noting that \partial e[n]/\partial h_k = -x[n-k], we obtain,

\frac{\partial J}{\partial h_k} = 2 E\left\{ e[n] \frac{\partial e[n]}{\partial h_k} \right\} = -2 E\{ e[n] x[n-k] \} = 0.   (3)
The above is the principle of orthogonality. In other words, J attains its minimum if and only if the
estimation error e[n] is orthogonal to the input samples x[n-k]. If we insert e[n] into (3), we obtain,

E\left\{ \left( d[n] - \sum_{i=0}^{P-1} h_i x[n-i] \right) x[n-k] \right\} = 0,

\sum_{i=0}^{P-1} h_i R_x(k-i) = r_{dx}(k), \quad k = 0, 1, \ldots, P-1.   (4)
In the above equation, R_x(\cdot) is the autocorrelation function of the input and r_{dx}(\cdot) is the
cross-correlation function of the desired and input signals. Let R_x denote the autocorrelation
matrix of the input, where R_x(k-i) is the element at the k-th row and i-th column of R_x (here,
the row and column indices of the autocorrelation matrix are assumed to start from 0). Let r_{dx} be
the cross-correlation vector of the desired and input signals, where r_{dx}(k) is the k-th element
of r_{dx}, again starting from index 0. The last equation in (4) is called the Wiener-Hopf
equation and can be written more compactly in matrix-vector notation as,

R_x h = r_{dx}.   (5)
The minimum MSE attained by the optimum filter is

MSE_o = E\{e^2[n]\} = R_d(0) - h^T r_{dx} = R_d(0) - \sum_{m=0}^{P-1} h_m r_{dx}(m),   (6)

where R_d(0) is the 0th term of the autocorrelation sequence of the desired signal, and P is
the length of the FIR Wiener filter. Hence, (6) can be used to compute the theoretical minimum
MSE. It is also possible to compute the least squares error (LSE), a deterministic counterpart
of the MSE that is computed from the signal samples, i.e.,

LSE = \frac{1}{N} \sum_{n=0}^{N-1} e^2[n],   (7)

where N is the number of available samples.
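As a quick illustration, the MATLAB fragment below solves the Wiener-Hopf system (5) and evaluates (6) and (7). The numerical correlation values and the variable names are placeholders only; in the experiment they come from the signal model discussed below.

P        = 3;
Rx_vals  = [1.25 0.50 0.10];            % placeholder values for R_x(0), R_x(1), R_x(2)
rdx_vals = [1.00 0.40 0.08];            % placeholder values for r_dx(0), r_dx(1), r_dx(2)
Rd0      = 1.00;                        % placeholder value for R_d(0)
Rx   = toeplitz(Rx_vals);               % P x P symmetric Toeplitz autocorrelation matrix
h    = Rx \ rdx_vals(:);                % Wiener-Hopf solution, eq. (5)
MSEo = Rd0 - h.' * rdx_vals(:);         % theoretical minimum MSE, eq. (6)
% Given sample sequences d and y of equal length, the LSE of eq. (7) is
% LSE = mean((d - y).^2);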
Wiener filters can be used in a variety of applications. Filtering of a signal in noise (noise
removal), prediction of a signal in noise, smoothing of a signal in noise, and linear prediction
are some examples of these applications. In this experiment, two applications, namely the
filtering of a signal in noise and prediction, are considered. In the following part, examples of
such applications are given. However, first a signal with a known correlation function is
discussed.
A random process s[n], under certain conditions, can be represented as in Fig. 1: it is obtained
as the output of a filter whose input is white Gaussian noise. This is known as the
innovations representation of the random process s[n].
Fig. 1: Innovations representation of s[n]: white Gaussian noise applied to the filter H(z) produces s[n], where

H(z) = \frac{1}{1 - 0.2 z^{-1}}.   (8)
Then it is possible to show that the autocorrelation sequence of s[n] is given as,

R_s(m) = \frac{0.2^{|m|}}{1 - 0.2^2} = \frac{0.2^{|m|}}{0.96}.   (9)

Assume that the signal is corrupted by a white Gaussian sequence v[n] with R_v(m) = \sigma_v^2 \delta[m].
Therefore, the observed signal, x[n], is given as,

x[n] = s[n] + v[n].   (10)
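A minimal MATLAB sketch of this signal model is given below; the sample count N and the noise standard deviation sigma_v are assumed example values, not values prescribed by the experiment.

N       = 1000;                    % number of samples (assumed)
a       = 0.2;                     % pole of the innovations filter, eq. (8)
sigma_v = 1;                       % noise standard deviation (assumed)
w = randn(N,1);                    % unit-variance white Gaussian noise (Fig. 1 input)
s = filter(1, [1 -a], w);          % s[n]: output of H(z) = 1/(1 - a z^-1)
v = sigma_v * randn(N,1);          % white Gaussian observation noise v[n]
x = s + v;                         % observed signal, eq. (10)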
In this case, the desired signal is d[n] = s[n] and R_d(m) = R_s(m). Note that the problem is as
follows: given the noisy signal x[n], its correlation function R_x(m), and the cross-correlation
r_{dx}(m), find the MSE-optimum filter coefficients h_k such that the filter output estimates the desired
signal. The desired filter length is given as P = 3. It is possible to write r_{dx}(m) and R_x(m) as
follows, i.e.,

r_{dx}(m) = E\{ d[n] x[n-m] \} = E\{ s[n] (s[n-m] + v[n-m]) \} = R_s(m) = \frac{0.2^{|m|}}{0.96},   (11)

R_x(m) = E\{ (s[n] + v[n])(s[n-m] + v[n-m]) \} = R_s(m) + R_v(m) = \frac{0.2^{|m|}}{0.96} + \sigma_v^2 \delta[m].   (12)
Note that v[n] and s[n] are assumed to be uncorrelated. Using the Wiener-Hopf equations in
(4) or (5), we obtain,

\begin{bmatrix} R_x(0) & R_x(-1) & R_x(-2) \\ R_x(1) & R_x(0) & R_x(-1) \\ R_x(2) & R_x(1) & R_x(0) \end{bmatrix} \begin{bmatrix} h_0 \\ h_1 \\ h_2 \end{bmatrix} = \begin{bmatrix} r_{dx}(0) \\ r_{dx}(1) \\ r_{dx}(2) \end{bmatrix}.   (13)
Note that the correlation matrix is Hermitian symmetric, i.e., R_x(-1) = R_x^*(1). Once the
numerical values for the above expression are used, the Wiener filter coefficients can be easily
obtained through matrix inversion, i.e., h_{opt} = R_x^{-1} r_{dx}. The theoretical MSE is obtained from (6).
In this case, the problem is to predict the signal K steps ahead. Hence, the desired
signal is d[n] = s[n+K], where K = 2 is selected for simplicity. R_x(m) does not change and
is the same as in (12). r_{dx}(m) changes and is given below,

r_{dx}(m) = E\{ s[n+K] x[n-m] \} = R_s(m+K) = \frac{0.2^{|m+2|}}{0.96}.   (14)
MSE_o = R_d(0) - \sum_{m=0}^{2} h_m \frac{0.2^{|m+2|}}{0.96}.   (15)
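To make both cases concrete, the MATLAB sketch below builds R_x(m) and r_dx(m) from (9), (11)-(12) and (14) for P = 3 and solves (5) for Pstep = 0 (filtering) and Pstep = 2 (prediction); the noise variance sigma_v2 is an assumed value.

P = 3;  a = 0.2;  sigma_v2 = 1;               % filter length, pole, noise variance (assumed)
m  = 0:P-1;
Rs = a.^abs(m) / (1 - a^2);                   % R_s(m) = 0.2^|m| / 0.96, eq. (9)
Rx = toeplitz(Rs + sigma_v2*(m == 0));        % R_x(m) = R_s(m) + sigma_v^2 delta[m], eq. (12)
Rd0 = Rs(1);                                  % R_d(0) = R_s(0)
for Pstep = [0 2]                             % filtering (0) and 2-step prediction
    rdx = a.^abs(m + Pstep) / (1 - a^2);      % eq. (11) for Pstep = 0, eq. (14) for Pstep = 2
    h   = Rx \ rdx(:);                        % Wiener-Hopf solution, eq. (5)
    MSEo = Rd0 - h.' * rdx(:);                % theoretical minimum MSE, eq. (6)/(15)
    fprintf('Pstep = %d: h = [%8.4f %8.4f %8.4f], MSEo = %.4f\n', Pstep, h, MSEo);
end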
PART 1
MATLAB Programming Tasks
The input parameters of the MATLAB implementation include: the pole of the innovations filter, a, where H(z) = 1/(1 - a z^{-1}); the standard deviation of the noise, σ_v; and the prediction step, Pstep.
iii) Obtain the desired signal as shown in Fig. 1 and equation (8).
vi) Obtain the Wiener filter output, y[n], by convolving x[n] with h[n]. Obtain the LSE
from the samples using (7) and compare it with the MSE computed using (6). Plot s[n],
x[n], y[n], and e[n].
vii) Design a lowpass filter using the windowing technique (or the Parks-McClellan algorithm)
and filter x[n] with this new filter to obtain g[n]. Compute the LSE for g[n] and
compare it with the previous results. Explain your findings.
viii) Repeat iii-vii for prediction of a signal in noise by selecting Pstep = 2. Explain your
plots and findings. What happens when Pstep is increased? (One possible MATLAB
implementation of these steps is sketched after this list.)
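A minimal end-to-end sketch of tasks iii-viii is given below. It is only one possible implementation: the parameter values, the use of the model correlations from (9), (11)-(12) and (14), and the Parks-McClellan design parameters (filter order and band edges) are all assumptions, not values prescribed by the experiment.

% ---- parameters (assumed values) ----
N = 5000;  P = 3;  a = 0.2;  sigma_v = 1;  Pstep = 0;   % set Pstep = 2 for task viii
% ---- tasks iii: generate desired and observed signals (Fig. 1, eqs. (8), (10)) ----
s = filter(1, [1 -a], randn(N,1));            % s[n]: innovations model output
x = s + sigma_v*randn(N,1);                   % x[n] = s[n] + v[n]
d = [s(1+Pstep:end); zeros(Pstep,1)];         % d[n] = s[n+Pstep] (zero-padded at the end)
% ---- Wiener filter coefficients from (5), using the model correlations ----
m   = 0:P-1;
Rs  = a.^abs(m)/(1 - a^2);                    % eq. (9)
Rx  = toeplitz(Rs + sigma_v^2*(m == 0));      % eq. (12)
rdx = a.^abs(m + Pstep)/(1 - a^2);            % eq. (11) / (14)
h   = Rx \ rdx(:);
MSEo = Rs(1) - h.'*rdx(:);                    % eq. (6)
% ---- task vi: Wiener filter output, error, and LSE from (7) ----
y = filter(h, 1, x);                          % y[n]: Wiener filter output
e = d - y;
LSE_wiener = mean(e.^2);
% ---- task vii: lowpass Parks-McClellan filter for comparison ----
g_coeff = firpm(30, [0 0.2 0.3 1], [1 1 0 0]);% order and band edges are assumptions
g = filter(g_coeff, 1, x);
LSE_pm = mean((d - g).^2);
fprintf('MSEo = %.4f, LSE (Wiener) = %.4f, LSE (PM) = %.4f\n', MSEo, LSE_wiener, LSE_pm);
plot([s x y e]); legend('s[n]','x[n]','y[n]','e[n]');   % task vi plots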
In this part, you need to program the MyRIO for optimum filtering, similar to the MATLAB
implementation. The programming can use the MyRIO CPU, and hence the project structure may
look as shown in Fig. 2.
The front panel for Experiment 5 may look like the one shown in Fig. 3. The input
parameters for the system are described in the MATLAB section, and they are the same in the MyRIO
implementation. “milliseconds to wait” is added to slow down the execution in order to
obtain a better visualization of the waveforms. The Wiener filter coefficients are reported as
in Fig. 3. The theoretical MSE, the LSE of the Wiener filter, and the LSE of the lowpass Parks-McClellan
(PM) filter are shown as well. The parameters of the PM filter are also presented. The two
plots show the time response of the Wiener filter as well as the error signals.
You can use the Gaussian White Noise.vi function in Fig. 4 to generate white
Gaussian noise.
In order to generate the autocorrelation matrix of the input, you can use Create
Special Matrix.vi shown in Fig. 7 with the matrix type set to Toeplitz.
Programming Tasks
b) In addition to the two plots in Fig. 3, add two plots for the frequency characteristics
of the Wiener filter and PM filter.
c) Let the innovations process pole a = 0.8, the Wiener filter length be 3, and Pstep = 0. Obtain
the filter coefficients, the MSE, and the LSE values. Also include the waveform plots in your
report. Can you design a PM filter that gives a better LSE than the Wiener filter?
d) Let Pstep=2 and repeat c. Explain the differences between the results obtained in c
and d.
f) Now increase the length of the Wiener filter one by one, noting the MSE. Explain
the behavior of the MSE as the filter length increases.