
Algorithms and Derivations

Part 1: Time Delay Estimation Problem:


We consider the use of an LMS adaptive filter for the estimation of the time delay between two
measured signals. This problem occurs in a diverse range of applications, including radar, sonar,
and biomedical signal analysis. In the simplest arrangement, measurements $x(i)$ and $d(i)$, $i \ge 0$,
are made at sensors $S_1$ and $S_2$ separated by a distance $r$.

Fig. 1: Two sensors $S_1$ and $S_2$ separated by a distance $r$; $x(i)$ is measured at $S_1$ and $d(i) = x(i-D)$ at $S_2$, where $\theta$ is the angle of arrival and $D$ is the delay between $x(i)$ and $d(i)$.

The form of the measurement sensor varies according to the application. In sonar,
for example, the measurements are generated at the outputs of two hydrophones. The delay $D$
is related to the angle of arrival of the signal through simple geometry. Referring to the figure above, we have
$$D = \frac{r \sin\theta}{v}$$
where $v$ is the propagation velocity of the signal through the medium. Thus, the estimation of the
bearing angle is reduced to the estimation of the delay $D$.
The structure of the adaptive filter is shown in the figure below.

Fig. 2: Adaptive FIR filter structure: $x(i)$ from $S_1$ drives a tapped delay line with adjustable weights producing $y(i)$; the output is compared with $d(i) = x(i-D)$ from $S_2$ to form the error $e(i)$, which drives the adaptive algorithm.
In the figure above, $z^{-1}$ is the unit delay operator, i.e.,
$$z^{-1}x(i) = x(i-1), \qquad i \ge 0$$
The order of the FIR filter is $L$; we choose $L \ge D$.
Least Mean Square Algorithm:
From the figure above we have
$$y(i) = (f_0 + f_1 z^{-1} + \dots + f_D z^{-D} + \dots + f_L z^{-L})\, x(i)$$
$$= f_0 x(i) + f_1 x(i-1) + \dots + f_D x(i-D) + \dots + f_L x(i-L)$$
$$= [f_0\ f_1\ \dots\ f_D\ \dots\ f_L]\,[x(i)\ x(i-1)\ \dots\ x(i-D)\ \dots\ x(i-L)]^T = f^T \varphi(i) \qquad (1)$$
The parameter vector $f$ is an estimate of the optimal parameter vector $f^0$, obtained from the
measurements (observations) up to time $i-1$. Thus we may write
$$y(i) = f(i-1)^T \varphi(i)$$
where
$$f(i-1)^T = [f_0(i-1),\ f_1(i-1),\ \dots,\ f_D(i-1),\ \dots,\ f_L(i-1)] \qquad (2)$$

Note that the error $e(i)$ is given by
$$e(i) = d(i) - y(i) = d(i) - f(i-1)^T \varphi(i) \qquad (3)$$
It is not difficult to see that the vector of optimal parameters is
$$f^{0\,T} = [0,\ 0,\ \dots,\ 0,\ 1,\ 0,\ \dots,\ 0] \qquad (4)$$
with the single 1 in position $D$. When $f(i) \to f^0$, then $e(i) \to 0$.

The LMS algorithm is given as follows:
$$f(i) = f(i-1) + \mu\, \varphi(i)\, e(i) \qquad (5)$$
with $\varphi(i)$ and $e(i)$ defined by (1) and (3), respectively, and $\mu$ a step size.
Instead of (5) we can use the following normalized gradient algorithm.
Normalized Gradient Algorithm:
$$f(i) = f(i-1) + \frac{\gamma\, \varphi(i)\, e(i)}{r(i)}, \qquad 0 < \gamma < 2$$

where $r(i)$ is a normalization sequence satisfying $\|\varphi(i)\|^2 / r(i) \le 1$.
Possible choices of $r(i)$ are
$$r(i) = c + \|\varphi(i)\|^2, \qquad c > 0$$
$$r(i) = \lambda\, r(i-1) + \|\varphi(i)\|^2, \qquad 0 < \lambda < 1$$
$$r(i) = r_0 + \max_{0 \le k \le i} \|\varphi(k)\|^2, \qquad r_0 > 0.$$
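These updates are easy to exercise numerically. Below is a minimal Python sketch, with a white-noise source, $L = 8$, $D = 3$ and step sizes all chosen purely for illustration (none of these values come from the text); after adaptation the largest weight sits at tap $D$, which is how the delay estimate is read off:

```python
# Minimal sketch of LMS / normalized-gradient time-delay estimation.
# Assumptions (not from the text): white-noise source, L = 8, D = 3,
# and the step sizes mu, gamma, c below.
import numpy as np

rng = np.random.default_rng(0)
N, L, D = 5000, 8, 3                        # samples, filter order, true delay (L >= D)
x = rng.standard_normal(N)                  # x(i), measured at sensor S1
d = np.concatenate([np.zeros(D), x[:-D]])   # d(i) = x(i-D), measured at sensor S2

f = np.zeros(L + 1)                         # adaptive weights f(i)
mu, gamma, c = 0.01, 1.0, 1e-6
use_nga = True                              # False -> plain LMS (5)

for i in range(L, N):
    phi = x[i - np.arange(L + 1)]           # phi(i) = [x(i), x(i-1), ..., x(i-L)]
    e = d[i] - f @ phi                      # e(i) = d(i) - y(i)            (3)
    if use_nga:
        r = c + phi @ phi                   # r(i) = c + ||phi(i)||^2
        f += gamma * phi * e / r            # normalized gradient update
    else:
        f += mu * phi * e                   # LMS update                    (5)

print("estimated delay:", np.argmax(np.abs(f)))  # expected: 3
```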

Part 2 & 3: Noise Cancellation Problem:


Blind signal processing is a set of algorithms that are applied to a recorded signal without
any prior knowledge of the underlying system. This family of algorithms plays an
important role in many applications, such as communications, engineering tasks, and data
analysis.
Fig. 3: Adaptive noise canceller: the primary input is $d(n) = s(n) + v(n)$; the reference $x(n)$ drives the adaptive filter, whose output $y(n)$ is an estimate of $v(n)$; the error $e(n) = d(n) - y(n)$ is an estimate of $s(n)$ and also drives the estimation algorithm.

$d(n)$ = primary input
$x(n)$ = reference signal
Measurable signals: $d(n)$ and $x(n)$.
Unmeasurable signals: $v(n)$ and $s(n)$.
$s(n)$ is the useful signal.
$v(n)$ is a noise that we would like to cancel.
$v(n)$ and $x(n)$ are generated by the same source and therefore they are correlated.

We assume that the correlation is given by the following relation:
$$v(n) = F(q^{-1})\, x(n), \qquad F(q^{-1}) = f_0 + f_1 q^{-1} + \dots + f_L q^{-L}$$
Note that
$$d(n) = v(n) + s(n) = F(q^{-1})\, x(n) + s(n)$$
Then it is obvious that if the filter in the figure above has transfer function $F(q^{-1})$, the output is $y(n) = F(q^{-1})\,x(n) = v(n)$ and the error becomes $e(n) = d(n) - y(n) = s(n)$.
In practical situations $F(q^{-1})$ is not known, and an adaptive filter is used instead.
Fig. 4: Adaptive noise canceller with the filter parameterized as $\hat F(n-1, q^{-1}) = \hat f_0(n-1) + \hat f_1(n-1)\, q^{-1} + \dots + \hat f_L(n-1)\, q^{-L}$: it filters $x(n)$ to produce $y(n)$, and the estimation algorithm updates $\hat f(n-1)$ from the error $e(n)$.

The parameters $\hat f_k$, $k = 0, 1, \dots, L$ of the adaptive filter are estimated and updated at every
sampling interval $n = 0, 1, \dots$ so that the error $e(n)$ is reduced. The filter output is
$$y(n) = \hat F(n-1, q^{-1})\, x(n) = \big(\hat f_0(n-1) + \hat f_1(n-1)\, q^{-1} + \dots + \hat f_L(n-1)\, q^{-L}\big)\, x(n) = \hat f(n-1)^T \varphi(n) \qquad (3)$$
with $\varphi(n)^T = [x(n)\ x(n-1)\ \dots\ x(n-L)]$, and the error is
$$e(n) = d(n) - y(n) = d(n) - \hat f(n-1)^T \varphi(n) \qquad (4)$$
With these definitions of $\varphi(n)$ and $e(n)$, the LMS or normalized (NLMS) update of Part 1 can be applied.
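The following minimal sketch exercises the canceller, assuming a sinusoidal useful signal $s(n)$, a three-tap $F(q^{-1})$ and the step sizes shown (all illustrative choices, not from the text); after convergence the error $e(n)$ tracks $s(n)$:

```python
# Minimal sketch of the adaptive noise canceller with an NLMS-type update.
# Assumptions (not from the text): s(n) is a slow sinusoid, F(q^-1) has
# three taps, and gamma, c take the values below.
import numpy as np

rng = np.random.default_rng(1)
N, L = 4000, 5
fcoef = np.array([0.8, -0.4, 0.2])              # unknown F(q^-1)
x = rng.standard_normal(N)                      # reference x(n)
v = np.convolve(x, fcoef)[:N]                   # v(n) = F(q^-1) x(n)
s = np.sin(2 * np.pi * 0.01 * np.arange(N))     # useful signal s(n)
d = s + v                                       # primary input d(n)

fhat = np.zeros(L + 1)                          # adaptive filter f_hat(n-1)
gamma, c = 0.5, 1e-6
e = np.zeros(N)
for n in range(L, N):
    phi = x[n - np.arange(L + 1)]               # [x(n), ..., x(n-L)]
    e[n] = d[n] - fhat @ phi                    # e(n) = d(n) - y(n)        (4)
    fhat += gamma * phi * e[n] / (c + phi @ phi)  # normalized update

print("mean |e(n) - s(n)| over last 500 samples:",
      np.abs(e[-500:] - s[-500:]).mean())       # small after convergence
```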

Least Squares Estimate:

Let us choose $\hat\theta(i+1)$ so that it minimizes the sum of the squared errors of the fit,
$$V(\theta) = \sum_{k=0}^{i} \big(y(k+1) - \varphi(k)^T \theta\big)^2 \qquad (5)$$

A necessary condition for a local minimum is
$$\frac{\partial V}{\partial \theta} = 0,$$
which gives
$$\sum_{k=0}^{i} \big(y(k+1) - \varphi(k)^T \hat\theta(i+1)\big)\,(-\varphi(k)) = 0 \qquad (6)$$

Wherefrom it follows that
$$\Big[\sum_{k=0}^{i} \varphi(k)\,\varphi(k)^T\Big]\, \hat\theta(i+1) = \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (7)$$

Since $V(\theta)$ is
(i) non-negative for all $\theta$, and
(ii) a quadratic function in $\theta$,
it follows that the necessary condition (7) for a local minimum is also a sufficient condition for a global minimum.

Hence a least-squares estimate of $\theta_0$ is any estimate $\hat\theta(i+1)$ which satisfies (7). There may be
more than one minimizer of $V(\theta)$. However, if the matrix
$$R(i) = \sum_{k=0}^{i} \varphi(k)\,\varphi(k)^T \qquad (8)$$
is invertible, then there is a unique least-squares estimate
$$\hat\theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (9)$$
The matrix $R(i)$ is called the covariance matrix, a term inspired by its form (8).
The Ideal Case:
Suppose now that in the model above $d(i) = 0$; hence the system is exactly described by
$$y(i+1) = \varphi(i)^T \theta_0$$
Then $V(\theta_0) = 0$. Since $V(\theta) \ge 0$ for all $\theta$, it follows that $\theta_0$ minimizes $V(\theta)$. If $R(i)$ is invertible, we have
seen that the least-squares estimate which minimizes $V(\theta)$ is unique. Hence,
$$\hat\theta(i+1) = \theta_0$$
The condition that $R(i)$ is invertible is equivalent to saying that the set of simultaneous linear
equations
$$y(1) = \varphi(0)^T \theta, \quad y(2) = \varphi(1)^T \theta, \quad \dots, \quad y(i+1) = \varphi(i)^T \theta$$
has the unique solution $\theta = \theta_0$. When $R(i)$ is not invertible, this is equivalent to saying that the
data $\{y(k+1),\ \varphi(k),\ k = 0, \dots, i\}$ are not sufficient to uniquely determine $\theta_0$.

RECURSIVE LEAST SQUARES ALGORITHM:


The recursive least squares (RLS) adaptive filter is an algorithm which recursively finds the
filter coefficients that minimize a weighted linear least squares cost function of the input
signals.
We consider the least squares estimate
$$\hat\theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (10)$$
We put this into a simpler form by converting it to a recursion that updates $\hat\theta(i)$ to $\hat\theta(i+1)$, as follows. First,
$$R(i) = R(i-1) + \varphi(i)\,\varphi(i)^T \qquad (11)$$
Then,
$$R(i)\,\hat\theta(i+1) = \sum_{k=0}^{i} \varphi(k)\, y(k+1) = \sum_{k=0}^{i-1} \varphi(k)\, y(k+1) + \varphi(i)\, y(i+1)$$
$$= R(i-1)\,\hat\theta(i) + \varphi(i)\, y(i+1) \qquad \text{(from (10))}$$
$$= \big(R(i) - \varphi(i)\,\varphi(i)^T\big)\,\hat\theta(i) + \varphi(i)\, y(i+1) \qquad \text{(from (11))}$$
We now get
$$R(i)\,\hat\theta(i+1) = R(i)\,\hat\theta(i) - \varphi(i)\,\varphi(i)^T\,\hat\theta(i) + \varphi(i)\, y(i+1) \qquad (12)$$
i.e.,
$$R(i)\,\hat\theta(i+1) = R(i)\,\hat\theta(i) + \varphi(i)\,\big[y(i+1) - \varphi(i)^T\,\hat\theta(i)\big]$$
Thus,
$$\hat\theta(i+1) = \hat\theta(i) + R(i)^{-1}\,\varphi(i)\,\big[y(i+1) - \varphi(i)^T\,\hat\theta(i)\big] \qquad (13)$$
The recursion starts at time 0 with some $\hat\theta(0)$ and $R(0)$; the matrix $R(0)$ is selected to be symmetric and positive definite. The estimates provided by the recursions (13) and
$$R(i) = R(i-1) + \varphi(i)\,\varphi(i)^T$$
are called the recursive least squares (RLS) estimates.
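A minimal sketch of recursions (11) and (13) follows; the regressor sequence, noise level and $R(0)$ are illustrative assumptions. For clarity it computes $R(i)^{-1}\varphi(i)$ with a linear solve at each step; practical implementations instead propagate $P(i) = R(i)^{-1}$ with the matrix inversion lemma.

```python
# Minimal sketch of the RLS recursions (11) and (13). Assumptions (not from
# the text): random regressors, theta0 below, noise level 0.01, R(0) = 1e-3 I.
import numpy as np

rng = np.random.default_rng(2)
n_par, N = 4, 500
theta0 = np.array([1.0, -0.5, 0.25, 2.0])    # unknown system parameters
R = 1e-3 * np.eye(n_par)                     # R(0): symmetric, positive definite
theta = np.zeros(n_par)                      # theta_hat(0)

for i in range(N):
    phi = rng.standard_normal(n_par)                      # phi(i)
    y = phi @ theta0 + 0.01 * rng.standard_normal()       # y(i+1), small noise
    R += np.outer(phi, phi)                               # update (11)
    theta += np.linalg.solve(R, phi) * (y - phi @ theta)  # update (13)

print(np.round(theta, 3))                    # close to theta0
```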

Part 4: Identification of Linear Models:


Consider a single-input, single-output system described by the following difference equation,
$$y(i+1) + a_1 y(i) + \dots + a_n y(i+1-n) = b_1 u(i) + b_2 u(i-1) + \dots + b_m u(i+1-m), \qquad i = 0, 1, 2, \dots \qquad (1)$$
where $u(i)$ and $y(i)$ are the system input and output, respectively, and $q^{-1}$ is the unit delay
operator, i.e., $q^{-1} y(i) = y(i-1)$. The operator representation of the system (1) is
$$(1 + a_1 q^{-1} + \dots + a_n q^{-n})\, y(i+1) = (b_1 + b_2 q^{-1} + \dots + b_m q^{-m+1})\, u(i) \qquad (14)$$

or, in the compact form,
$$y(i+1) = \frac{B(q^{-1})}{A(q^{-1})}\, u(i) \qquad (15)$$
with
$$A(q^{-1}) = 1 + a_1 q^{-1} + \dots + a_n q^{-n}, \qquad B(q^{-1}) = b_1 + b_2 q^{-1} + \dots + b_m q^{-m+1} \qquad (16)$$
In (15), we refer to $B(q^{-1})/A(q^{-1})$ as the transfer operator of the system (1).

Define the vector of system parameters as
$$\theta_0^T = [-a_1,\ -a_2,\ \dots,\ -a_n;\ b_1,\ \dots,\ b_m] \qquad (17)$$
and the signal vector
$$\varphi(i)^T = [y(i),\ \dots,\ y(i+1-n);\ u(i),\ \dots,\ u(i+1-m)] \qquad (18)$$
Then the system (1) can be written
$$y(i+1) = \varphi(i)^T \theta_0 \qquad (19)$$

The more general system model is
$$y(i+1) = \varphi(i)^T \theta_0 + d(i) \qquad (20)$$
where the term $d(i)$ represents the cumulative effect of disturbances, unmodelled dynamics,
measurement errors, etc.
Note the linear dependence between the system output and the parameter vector $\theta_0$. We assume
that the parameters are unknown and pose the following question: given the observations $\{\varphi(0),$
$\varphi(1), \dots, \varphi(i);\ y(1), y(2), \dots, y(i+1)\}$, what is an estimate of $\theta_0$? If we denote this estimate by
$\hat\theta(i+1)$, the following identification model can be constructed:
$$\hat y(i+1) = \hat\theta(i+1)^T \varphi(i) \qquad (21)$$
The estimate $\hat\theta(i+1)$ is determined so that, in a certain sense, the model output $\hat y(i+1)$ matches the
system output $y(i+1)$.
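To make (17)-(19) concrete, here is a small sketch that simulates system (1) and forms $\theta_0$ and $\varphi(i)$; the orders $n = m = 2$ and the coefficient values are assumptions chosen for illustration:

```python
# Minimal sketch of (17)-(19). Assumptions (not from the text): orders
# n = m = 2, the coefficient values below, zero initial conditions, and a
# white-noise input.
import numpy as np

a = [0.7, -0.1]                                # a1, a2 (stable A(q^-1))
b = [1.0, 0.5]                                 # b1, b2
theta0 = np.array([-a[0], -a[1], b[0], b[1]])  # theta0 per (17)

rng = np.random.default_rng(3)
N = 200
u = rng.standard_normal(N)
y = np.zeros(N + 1)
for i in range(2, N):
    phi = np.array([y[i], y[i - 1], u[i], u[i - 1]])  # phi(i) per (18)
    y[i + 1] = phi @ theta0                           # y(i+1) = phi(i)^T theta0  (19)
# every sample now satisfies (19) by construction, i.e. d(i) = 0
```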


Least Squares Estimate:
Let us choose $\hat\theta(i+1)$ so that it minimizes the sum of the squared errors of the fit,
$$V(\theta) = \sum_{k=0}^{i} \big(y(k+1) - \varphi(k)^T \theta\big)^2$$
A necessary condition for a local minimum is
$$\frac{\partial V}{\partial \theta} = 0,$$
which gives
$$\sum_{k=0}^{i} \big(y(k+1) - \varphi(k)^T \hat\theta(i+1)\big)\,(-\varphi(k)) = 0 \qquad (22)$$

Wherefrom it follows that
$$\Big[\sum_{k=0}^{i} \varphi(k)\,\varphi(k)^T\Big]\, \hat\theta(i+1) = \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (23)$$

Since $V(\theta)$ is
(i) non-negative for all $\theta$, and
(ii) a quadratic function in $\theta$,
it follows that the necessary condition (23) for a local minimum is also a sufficient condition for a global minimum.

Hence a least-squares estimate of $\theta_0$ is any estimate $\hat\theta(i+1)$ which satisfies (23). There may be
more than one minimizer of $V(\theta)$. However, if the matrix
$$R(i) = \sum_{k=0}^{i} \varphi(k)\,\varphi(k)^T \qquad (24)$$
is invertible, then there is a unique least-squares estimate
$$\hat\theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (25)$$
The matrix $R(i)$ is called the covariance matrix, a term inspired by its form (24).
The Ideal Case:
Suppose now that in (20), $d(i) = 0$; hence the system is exactly described by
$$y(i+1) = \varphi(i)^T \theta_0$$
Then $V(\theta_0) = 0$. Since $V(\theta) \ge 0$ for all $\theta$, it follows that $\theta_0$ minimizes $V(\theta)$. If $R(i)$ is invertible, we have
seen that the least-squares estimate which minimizes $V(\theta)$ is unique. Hence,
$$\hat\theta(i+1) = \theta_0$$
The condition that $R(i)$ is invertible is equivalent to saying that the set of simultaneous linear
equations
$$y(1) = \varphi(0)^T \theta, \quad y(2) = \varphi(1)^T \theta, \quad \dots, \quad y(i+1) = \varphi(i)^T \theta$$
has the unique solution $\theta = \theta_0$. When $R(i)$ is not invertible, this is equivalent to saying that the
data $\{y(k+1),\ \varphi(k),\ k = 0, \dots, i\}$ are not sufficient to uniquely determine $\theta_0$.
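Numerically, (24)-(25) amount to stacking the regressors and solving one linear system. A sketch, reusing the assumed second-order system from the earlier simulation (with $d(i) = 0$, so $\theta_0$ is recovered exactly):

```python
# Minimal sketch of the batch least-squares estimate (25). Assumptions (not
# from the text): the same 2nd-order system as in the previous sketch, with
# d(i) = 0, so R(i) is invertible and theta0 is recovered exactly.
import numpy as np

rng = np.random.default_rng(3)
theta0 = np.array([-0.7, 0.1, 1.0, 0.5])     # [-a1, -a2, b1, b2] per (17)
N = 200
u = rng.standard_normal(N)
y = np.zeros(N + 1)
rows, rhs = [], []
for i in range(2, N):
    phi = np.array([y[i], y[i - 1], u[i], u[i - 1]])  # phi(i) per (18)
    y[i + 1] = phi @ theta0                           # ideal case: d(i) = 0
    rows.append(phi)
    rhs.append(y[i + 1])

Phi, Y = np.array(rows), np.array(rhs)
R = Phi.T @ Phi                              # R(i) = sum phi(k) phi(k)^T   (24)
theta_hat = np.linalg.solve(R, Phi.T @ Y)    # unique LS estimate           (25)
print(np.round(theta_hat, 6))                # equals theta0 up to rounding
```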

The Recursive Least Squares Estimate:

Consider the least squares estimate
$$\hat\theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (26)$$
We wish to determine a recursive form of this estimate which is suitable for easily updating
$\hat\theta(i)$ to $\hat\theta(i+1)$.
Recalling that $R(i) = \sum_{k=0}^{i} \varphi(k)\,\varphi(k)^T$, we see that
$$R(i) = R(i-1) + \varphi(i)\,\varphi(i)^T \qquad (27)$$
Then, from (26),
$$R(i)\,\hat\theta(i+1) = \sum_{k=0}^{i} \varphi(k)\, y(k+1) = \sum_{k=0}^{i-1} \varphi(k)\, y(k+1) + \varphi(i)\, y(i+1)$$
$$= R(i-1)\,\hat\theta(i) + \varphi(i)\, y(i+1) = \big(R(i) - \varphi(i)\,\varphi(i)^T\big)\,\hat\theta(i) + \varphi(i)\, y(i+1)$$
Thus
$$R(i)\,\hat\theta(i+1) = R(i)\,\hat\theta(i) + \varphi(i)\,\big[y(i+1) - \varphi(i)^T\,\hat\theta(i)\big] \qquad (28)$$
Then,
$$\hat\theta(i+1) = \hat\theta(i) + R(i)^{-1}\,\varphi(i)\,\big[y(i+1) - \varphi(i)^T\,\hat\theta(i)\big] \qquad (29)$$
Thus, from the stored values $\hat\theta(i)$ and $R(i-1)$, and the new observations $y(i+1)$ and $\varphi(i)$, one can
calculate the new values $R(i)$ and $\hat\theta(i+1)$ by using (27) and (29).

For a precise implementation of the least squares estimates, one needs to initialize the
recursions with the correct least squares estimates $\hat\theta(i_0)$ and $R(i_0 - 1)$, calculated from (24) and
(25) at a time $i_0$ for which $R(i_0 - 1)$ is invertible.

In practice, however, the recursive procedure is initialized at time 0 with some $\hat\theta(0)$ and $R(0)$.
The matrix $R(0)$ is chosen to be symmetric and positive definite. Note that then every $R(i)$
for $i \ge 0$ is also symmetric and positive definite. The estimates provided by the recursions so
initialized,
$$\hat\theta(i+1) = \hat\theta(i) + R(i)^{-1}\,\varphi(i)\,\big[y(i+1) - \varphi(i)^T\,\hat\theta(i)\big] \qquad (30)$$
$$R(i) = R(i-1) + \varphi(i)\,\varphi(i)^T \qquad (31)$$
are called the recursive least squares (RLS) estimates.

Part 5:
A: Consider an adaptive system,

Fig. 5: the input $u(i)$ drives the FIR system $b_0 + b_1 q^{-1} + \dots + b_L q^{-L}$, which produces $y(i+1)$.

$$\theta^T = [b_0\ b_1\ \dots\ b_L], \qquad \varphi(i)^T = [u(i)\ u(i-1)\ \dots\ u(i-L)]$$

Fig. 6: adaptive estimator: $u(i)$ and $y(i+1)$ feed an adaptive estimator producing $\hat y(i+1)$; the error $e(i+1) = y(i+1) - \hat y(i+1)$ drives the parameter update.

Here $\theta$ is the scalar parameter; in the derivation below we take the single-tap case, $\varphi(i) = u(i)$.


To estimate this scalar parameter, the sum of squared errors
$$V(\theta) = \sum_{k=0}^{i} \big(y(k+1) - \theta\, u(k)\big)^2$$
must be minimized. To minimize the error, the derivative is set equal to zero:
$$\frac{dV}{d\theta} = 0 \;\Longrightarrow\; \sum_{k=0}^{i} u(k)\, y(k+1) = \hat\theta(i+1) \sum_{k=0}^{i} u(k)^2 = \hat\theta(i+1)\, p(i)^{-1}$$
and similarly, for the data up to time $i-1$,
$$\sum_{k=0}^{i-1} u(k)\, y(k+1) = \hat\theta(i)\, p(i-1)^{-1}$$
where
$$p(i)^{-1} = \sum_{k=0}^{i} u(k)^2, \qquad p(i-1)^{-1} = \sum_{k=0}^{i-1} u(k)^2$$
so that
$$p(i)^{-1} - p(i-1)^{-1} = u(i)^2, \qquad \text{i.e.,}\qquad p(i-1)^{-1} = p(i)^{-1} - u(i)^2$$
Subtracting the two normal equations,
$$y(i+1)\, u(i) = \hat\theta(i+1)\, p(i)^{-1} - \hat\theta(i)\, p(i-1)^{-1} = \hat\theta(i+1)\, p(i)^{-1} - \hat\theta(i)\,\big(p(i)^{-1} - u(i)^2\big)$$
$$y(i+1)\, u(i) - \hat\theta(i)\, u(i)^2 = \big[\hat\theta(i+1) - \hat\theta(i)\big]\, p(i)^{-1}$$
$$u(i)\,\big[y(i+1) - \hat\theta(i)\, u(i)\big] = \big[\hat\theta(i+1) - \hat\theta(i)\big]\, p(i)^{-1}$$
Multiplying both sides by $p(i)$, the RLS algorithm is
$$\hat\theta(i+1) = \hat\theta(i) + p(i)\, u(i)\,\big[y(i+1) - \hat\theta(i)\, u(i)\big], \qquad p(i)^{-1} = p(i-1)^{-1} + u(i)^2$$
If instead the gain is held constant, $p(i) = p(i-1) = \mu$, we obtain the LMS algorithm,


$$\hat\theta(i+1) = \hat\theta(i) + \mu\, u(i)\,\big[y(i+1) - \hat\theta(i)\, u(i)\big]$$
and the NLMS algorithm is
$$\hat\theta(i+1) = \hat\theta(i) + \frac{\gamma\, u(i)}{c + u(i)^2}\,\big[y(i+1) - \hat\theta(i)\, u(i)\big]$$
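The three scalar updates are easy to compare side by side. In the sketch below, $\theta_0 = 1.5$ and the step sizes are illustrative assumptions (not from the text); all three estimates approach $\theta_0$:

```python
# Minimal side-by-side sketch of the three scalar updates derived above.
# Assumptions (not from the text): theta0 = 1.5, noise-free data, and the
# step sizes mu, gamma, c below.
import numpy as np

rng = np.random.default_rng(4)
theta0, N = 1.5, 2000
u = rng.standard_normal(N)
y = theta0 * u                               # pair u(i) with y(i+1) = theta0 u(i)

th_rls = th_lms = th_nlms = 0.0
p_inv = 1e-3                                 # p(0)^-1 > 0
mu, gamma, c = 0.05, 1.0, 1e-6
for i in range(N):
    err = lambda th: y[i] - th * u[i]        # prediction error y(i+1) - th*u(i)
    p_inv += u[i] ** 2                       # p(i)^-1 = p(i-1)^-1 + u(i)^2
    th_rls += (u[i] / p_inv) * err(th_rls)                     # RLS
    th_lms += mu * u[i] * err(th_lms)                          # LMS
    th_nlms += gamma * u[i] * err(th_nlms) / (c + u[i] ** 2)   # NLMS

print(round(th_rls, 4), round(th_lms, 4), round(th_nlms, 4))   # all ~ 1.5
```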

C:
By considering the RLS algorithm in the ideal case $y(i+1) = \theta_0\, u(i)$, define the parameter error $\tilde\theta(i) = \hat\theta(i) - \theta_0$. Subtracting $\theta_0$ from both sides of the RLS update,
$$\hat\theta(i+1) - \theta_0 = \big(\hat\theta(i) - \theta_0\big) + p(i)\, u(i)\,\big[y(i+1) - \hat\theta(i)\, u(i)\big] = \tilde\theta(i) - p(i)\, u(i)^2\, \tilde\theta(i)$$
Multiplying by $p(i)^{-1}$ and using $p(i)^{-1} = p(i-1)^{-1} + u(i)^2$,
$$p(i)^{-1}\, \tilde\theta(i+1) = \big(p(i)^{-1} - u(i)^2\big)\, \tilde\theta(i) = p(i-1)^{-1}\, \tilde\theta(i)$$
Hence the sequence $X_k = p(k-1)^{-1}\, \tilde\theta(k)$ satisfies
$$X_{k+1} = X_k, \qquad k = 0, 1, 2, \dots$$
Then $X_{k+1} = X_1$, i.e.,
$$p(i)^{-1}\, \tilde\theta(i+1) = p(0)^{-1}\, \tilde\theta(1) \qquad \text{for all } i \ge 0$$
so that
$$\tilde\theta(i+1) = p(i)\, p(0)^{-1}\, \tilde\theta(1)$$
Therefore $\hat\theta(i) \to \theta_0$ if and only if $p(i) \to 0$, i.e., if and only if $\sum_{k} u(k)^2 \to \infty$.
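This condition can be checked numerically: a persistently exciting input makes $\sum_k u(k)^2$ diverge, so $p(i) \to 0$ and the estimate converges, while an input whose energy stays bounded leaves $p(i)$ away from zero and the estimate biased. A sketch with illustrative inputs:

```python
# Minimal sketch of the convergence condition. Assumptions (not from the
# text): theta0 = 1.5, p(0)^-1 = 1, and the two inputs below. A persistently
# exciting input drives p(i) -> 0 (estimate converges); a decaying input
# leaves p(i) bounded away from zero (estimate stays biased).
import numpy as np

def scalar_rls(u, theta0=1.5, p_inv0=1.0):
    th, p_inv = 0.0, p_inv0
    for ui in u:
        p_inv += ui ** 2                              # p(i)^-1 = p(i-1)^-1 + u(i)^2
        th += (ui / p_inv) * (theta0 * ui - th * ui)  # scalar RLS update
    return th, 1.0 / p_inv

persistent = np.random.default_rng(5).standard_normal(5000)
decaying = 0.5 ** np.arange(5000)            # sum u(k)^2 stays bounded

for name, u in [("persistent", persistent), ("decaying", decaying)]:
    th, p = scalar_rls(u)
    print(f"{name}: theta_hat = {th:.4f}, p(i) = {p:.2e}")
# persistent: theta_hat ~ 1.5 with p(i) ~ 0; decaying: theta_hat well off 1.5
```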
