LMS, NLMS and RLS Algorithms
Fig.1: Two-sensor arrangement. Sensor S1 receives the signal x(i) and sensor S2 receives d(i) = x(i-D), where D is the delay between x(i) and d(i), determined by the angle of arrival.
Fig.2: Adaptive FIR filter. A tapped delay line driven by x(i) produces the output y(i); the error e(i) between the desired signal and y(i) drives the adaptive algorithm.
In the above figure, $z^{-1}$ is the unit delay, i.e., $z^{-1} x(i) = x(i-1)$, $i \ge 0$. The order of the FIR filter is $L$; we choose $L \ge D$.
Least Mean Square (LMS) Algorithm:
From the above figure we have
$$ y(i) = (\theta_0 + \theta_1 z^{-1} + \cdots + \theta_D z^{-D} + \cdots + \theta_L z^{-L})\, x(i) $$
$$ = \theta_0 x(i) + \theta_1 x(i-1) + \cdots + \theta_D x(i-D) + \cdots + \theta_L x(i-L) $$
$$ = [\theta_0\ \ \theta_1\ \cdots\ \theta_D\ \cdots\ \theta_L]\,[x(i)\ \ x(i-1)\ \cdots\ x(i-D)\ \cdots\ x(i-L)]^T = \theta^T \varphi(i) \qquad (1) $$
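As a concrete illustration of (1), the following sketch in Python/NumPy (the helper name fir_output and the toy signal are our own choices, not from the text) computes y(i) as the inner product of θ with the regressor φ(i) built from the L+1 most recent input samples:

```python
import numpy as np

def fir_output(theta, x, i):
    """Output y(i) = theta^T phi(i) of an order-L FIR filter,
    where phi(i) = [x(i), x(i-1), ..., x(i-L)] (zeros before time 0)."""
    L = len(theta) - 1
    phi = np.array([x[i - k] if i - k >= 0 else 0.0 for k in range(L + 1)])
    return theta @ phi

# Example: L = 3, delay D = 2; theta places a 1 at tap D,
# so y(i) reproduces x(i-2), exactly as in d(i) = x(i-D).
theta = np.array([0.0, 0.0, 1.0, 0.0])
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print([fir_output(theta, x, i) for i in range(len(x))])  # [0, 0, 1, 2, 3]
```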
The parameter vector $\theta$ is an estimate of the optimal parameters $\theta_0$, obtained from the measurements (observations) up to time $i-1$. Thus we may write
$$ y(i) = \theta(i-1)^T \varphi(i), $$
where
$$ \theta(i-1)^T = [\theta_0(i-1),\ \theta_1(i-1),\ \ldots,\ \theta_D(i-1),\ \ldots,\ \theta_L(i-1)] \qquad (2) $$
Since $d(i) = x(i-D)$, the optimal parameter vector is
$$ \theta_0 = [0,\ 0,\ \ldots,\ 0,\ 1,\ 0,\ \ldots,\ 0]^T, $$
with the single 1 at the tap corresponding to the delay $D$.
The parameters are adapted with the LMS update
$$ \theta(i) = \theta(i-1) + \mu\, e(i)\, \varphi(i), $$
where the step size satisfies $0 < \mu < 2$ (in practice $\mu \le 1$). When $\theta(i) \to \theta_0$, the error $e(i) \to 0$.
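A minimal sketch of the LMS recursion above for the delay-identification setup of Fig.1 and Fig.2 (Python/NumPy; the signal length, seed, and step size value are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, D, mu = 4, 2, 0.1               # filter order, true delay, step size (0 < mu < 2)
N = 2000
x = rng.standard_normal(N)                  # reference input at sensor S1
d = np.concatenate([np.zeros(D), x[:-D]])   # desired signal d(i) = x(i-D) at S2

theta = np.zeros(L + 1)
for i in range(L, N):
    phi = x[i - np.arange(L + 1)]   # regressor phi(i) = [x(i), ..., x(i-L)]
    e = d[i] - theta @ phi          # error e(i) = d(i) - y(i)
    theta += mu * e * phi           # LMS update

print(np.round(theta, 3))  # approaches [0, 0, 1, 0, 0]: a single 1 at tap D = 2
```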
Fig.3: Adaptive filter configuration. The primary input d(n) consists of the signal s(n) plus a disturbance v(n); the adaptive filter processes the reference x(n) to produce y(n), which is subtracted from d(n) to form the error e(n), and the estimation algorithm adjusts the filter from e(n).

d(n) = primary input
x(n) = reference signal
Measurable signals: d(n) and x(n)
Fig.4: Realization of the adaptive filter. The reference x(n) is filtered by $\theta_0(n-1) + \theta_1(n-1)\,q^{-1} + \cdots + \theta_L(n-1)\,q^{-L}$ to produce y(n); the estimation algorithm updates the parameter vector $\theta(n-1)$ from the signals d(n), v(n) and the error e(n).
The adjustable filter is parameterized by
$$ \theta(n-1)^T = [\theta_0(n-1),\ \ldots,\ \theta_L(n-1)], \qquad \varphi(n)^T = [x(n),\ x(n-1),\ \ldots,\ x(n-L)], $$
so that
$$ y(n) = \theta(n-1)^T \varphi(n) \qquad (3) $$
The error is
$$ e(n) = d(n) - y(n) = d(n) - \theta(n-1)^T \varphi(n), $$
and the NLMS (normalized LMS) algorithm updates the parameters with a step size normalized by the regressor energy:
$$ \theta(n) = \theta(n-1) + \frac{\mu}{\varphi(n)^T \varphi(n)}\, \varphi(n)\,\big[d(n) - \theta(n-1)^T \varphi(n)\big] \qquad (4) $$
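A sketch of the NLMS update (4) in Python/NumPy; the small constant eps guarding the division when the regressor energy is near zero is standard practice rather than something stated in the text:

```python
import numpy as np

def nlms(x, d, L, mu=0.5, eps=1e-8):
    """Run the NLMS recursion of (3)-(4); returns the final parameter vector."""
    theta = np.zeros(L + 1)
    for n in range(L, len(x)):
        phi = x[n - np.arange(L + 1)]               # phi(n) = [x(n), ..., x(n-L)]
        e = d[n] - theta @ phi                      # e(n) = d(n) - y(n)
        theta += mu / (eps + phi @ phi) * e * phi   # normalized update
    return theta

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
d = np.concatenate([np.zeros(2), x[:-2]])           # d(n) = x(n-2), as in Fig.1
print(np.round(nlms(x, d, L=4), 3))                 # ~[0, 0, 1, 0, 0]
```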
The least squares criterion is
$$ V(\theta) = \sum_{k=0}^{i} \big(y(k+1) - \theta^T \varphi(k)\big)^2 \qquad (5) $$
A necessary condition for a local minimum is that the gradient vanishes, which gives
$$ \sum_{k=0}^{i} \big(y(k+1) - \varphi(k)^T \theta(i+1)\big)\,(-\varphi(k)) = 0 \qquad (6) $$
$$ \theta(i+1) = \Big[\sum_{k=0}^{i} \varphi(k)\varphi(k)^T\Big]^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (7) $$
Since $V(\theta)$ is a quadratic function in $\theta$, it follows that the necessary condition (7) for a local minimum is also a sufficient condition for a global minimum.
Define
$$ R(i) = \sum_{k=0}^{i} \varphi(k)\varphi(k)^T \qquad (8) $$
so that
$$ \theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (9) $$
The matrix $R(i)$ is called the covariance matrix, a term inspired by its form (8).
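The batch estimate (9) can be computed directly; the sketch below (Python/NumPy, with our own helper names and toy data) solves the normal equations with np.linalg.solve rather than forming $R(i)^{-1}$ explicitly, a routine numerical choice:

```python
import numpy as np

def batch_ls(phi_list, y_list):
    """Least squares estimate (9): theta = R^{-1} sum phi(k) y(k+1)."""
    R = sum(np.outer(p, p) for p in phi_list)           # R(i), eq. (8)
    b = sum(p * y for p, y in zip(phi_list, y_list))    # sum phi(k) y(k+1)
    return np.linalg.solve(R, b)

rng = np.random.default_rng(2)
theta0 = np.array([0.0, 1.0, 0.0])                      # true parameters
x = rng.standard_normal(200)
phis = [x[k - np.arange(3)] for k in range(2, 199)]     # regressors phi(k)
ys = [theta0 @ p for p in phis]                         # ideal case: d(i) = 0
print(np.round(batch_ls(phis, ys), 6))                  # recovers theta0
```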
The Ideal Case:
Suppose now that in the above equation $d(i) = 0$. Hence the system is exactly described by
$$ y(i+1) = \varphi(i)^T \theta_0. $$
Then $V(\theta_0) = 0$. Since $V(\theta) \ge 0$ for all $\theta$, it follows that $\theta_0$ minimizes $V(\theta)$. If $R(i)$ is invertible, we have seen that the least squares estimate which minimizes $V(\theta)$ is unique. Hence
$$ \theta(i+1) = \theta_0. $$
The condition that $R(i)$ is invertible is equivalent to saying that the set of simultaneous linear equations
$$ y(1) = \varphi(0)^T \theta, \qquad y(2) = \varphi(1)^T \theta, \qquad \ldots, \qquad y(i+1) = \varphi(i)^T \theta $$
determines $\theta$ uniquely, and the estimate is
$$ \hat\theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (10) $$
We put (10) into a simpler, recursive form, updating $\hat\theta(i)$ to $\hat\theta(i+1)$, as follows. First,
$$ R(i) = R(i-1) + \varphi(i)\varphi(i)^T \qquad (11) $$
Then, from (10),
$$ R(i)\,\hat\theta(i+1) = \sum_{k=0}^{i} \varphi(k)\, y(k+1) = \sum_{k=0}^{i-1} \varphi(k)\, y(k+1) + \varphi(i)\, y(i+1) = R(i-1)\,\hat\theta(i) + \varphi(i)\, y(i+1), $$
and using (11) we now get
$$ R(i)\,\hat\theta(i+1) = R(i)\,\hat\theta(i) - \varphi(i)\varphi(i)^T\hat\theta(i) + \varphi(i)\, y(i+1) \qquad (12) $$
so that
$$ \hat\theta(i+1) = \hat\theta(i) + R(i)^{-1}\varphi(i)\,\big[y(i+1) - \varphi(i)^T\hat\theta(i)\big] \qquad (13) $$
The recursion starts at time 0 with some $\hat\theta(0)$ and $R(0)$; the matrix $R(0)$ is selected to be positive definite and symmetric. The estimates provided by the recursion (13) together with
$$ R(i) = R(i-1) + \varphi(i)\varphi(i)^T \qquad (14) $$
are called the recursive least squares (RLS) estimates.
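A direct transcription of recursions (13)-(14) in Python/NumPy, with our own toy data. For clarity, $R(i)$ is handled with a linear solve at each step; practical implementations instead propagate $P(i) = R(i)^{-1}$ via the matrix inversion lemma, a standard refinement not derived here:

```python
import numpy as np

def rls(phis, ys, r0_scale=1e-3):
    """RLS recursions (13)-(14): update theta and R from each (phi(i), y(i+1))."""
    dim = len(phis[0])
    theta = np.zeros(dim)
    R = r0_scale * np.eye(dim)           # R(0): symmetric, positive definite
    for phi, y in zip(phis, ys):
        R += np.outer(phi, phi)                               # (14)
        theta += np.linalg.solve(R, phi) * (y - phi @ theta)  # (13)
    return theta

rng = np.random.default_rng(3)
theta_true = np.array([0.5, -0.3, 1.0])
phis = [rng.standard_normal(3) for _ in range(500)]
ys = [theta_true @ p for p in phis]      # ideal, disturbance-free data
print(np.round(rls(phis, ys), 4))        # converges to theta_true
```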
Consider now the system
$$ A(q^{-1})\, y(i+1) = B(q^{-1})\, u(i) + d(i+1) \qquad (15) $$
with
$$ A(q^{-1}) = 1 + a_1 q^{-1} + \cdots + a_n q^{-n} \qquad (16) $$
$$ B(q^{-1}) = b_1 + b_2 q^{-1} + \cdots + b_m q^{-m+1} \qquad (17) $$
As in (3), we refer to the parameter vector
$$ \theta_0^T = [a_1,\ \ldots,\ a_n,\ b_1,\ \ldots,\ b_m] \qquad (18) $$
and the regressor
$$ \varphi(i)^T = [-y(i),\ \ldots,\ -y(i-n+1),\ u(i),\ \ldots,\ u(i-m+1)] \qquad (19) $$
so that the system output can be written
$$ y(i+1) = \varphi(i)^T \theta_0 + d(i+1) \qquad (20) $$
where the term $d(i)$ represents the cumulative effects of disturbance, unmodelled dynamics, measurement error, etc.
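A sketch of building the regressor (19) and simulating the regression form (20) for a small ARX example (Python/NumPy; the orders n = m = 2 and the coefficient values are ours, chosen only for illustration):

```python
import numpy as np

def arx_phi(y, u, i, n, m):
    """Regressor phi(i) = [-y(i),...,-y(i-n+1), u(i),...,u(i-m+1)], eq. (19)."""
    return np.concatenate([-y[i - np.arange(n)], u[i - np.arange(m)]])

rng = np.random.default_rng(4)
n, m = 2, 2
theta0 = np.array([0.4, -0.2, 1.0, 0.5])   # [a1, a2, b1, b2]
N = 100
u = rng.standard_normal(N)
y = np.zeros(N + 1)
for i in range(n, N):
    # y(i+1) = phi(i)^T theta0 in the disturbance-free case d(i+1) = 0
    y[i + 1] = arx_phi(y, u, i, n, m) @ theta0
```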
Note the linear dependence between the system output and the parameter vector $\theta_0$. We assume that the parameters are unknown and pose the following question: given the observations $\{\varphi(0), \varphi(1), \ldots, \varphi(i);\ y(1), y(2), \ldots, y(i+1)\}$, what is an estimate of $\theta_0$? If we denote this estimate by $\theta(i+1)$, the following identification criterion can be constructed:
$$ V(\theta) = \sum_{k=0}^{i} \big(y(k+1) - \theta^T \varphi(k)\big)^2 \qquad (21) $$
A necessary condition for a local minimum is
$$ \frac{\partial V}{\partial \theta} = 0, $$
which gives
$$ \sum_{k=0}^{i} \big(y(k+1) - \varphi(k)^T \theta(i+1)\big)\,(-\varphi(k)) = 0 \qquad (22) $$
$$ \theta(i+1) = \Big[\sum_{k=0}^{i} \varphi(k)\varphi(k)^T\Big]^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (23) $$
Since $V(\theta)$ is a quadratic function in $\theta$, it follows that the necessary condition (23) for a local minimum is also a sufficient condition for a global minimum.
Define
$$ R(i) = \sum_{k=0}^{i} \varphi(k)\varphi(k)^T \qquad (24) $$
$$ \theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (25) $$
The matrix $R(i)$ is called the covariance matrix, a term inspired by its form (24).
The Ideal Case:
Suppose now that in (20), $d(i) = 0$. Hence the system is exactly described by
$$ y(i+1) = \varphi(i)^T \theta_0. $$
Then $V(\theta_0) = 0$. Since $V(\theta) \ge 0$ for all $\theta$, it follows that $\theta_0$ minimizes $V(\theta)$. If $R(i)$ is invertible, we have seen that the least squares estimate which minimizes $V(\theta)$ is unique. Hence
$$ \theta(i+1) = \theta_0. $$
The condition that $R(i)$ is invertible is equivalent to saying that the set of simultaneous linear equations
$$ y(1) = \varphi(0)^T \theta, \qquad y(2) = \varphi(1)^T \theta, \qquad \ldots, \qquad y(i+1) = \varphi(i)^T \theta $$
determines $\theta$ uniquely. Given the data $\{y(k+1), \varphi(k)\}_{k=0}^{i}$, the least squares estimate is
$$ \theta(i+1) = R(i)^{-1} \sum_{k=0}^{i} \varphi(k)\, y(k+1) \qquad (26) $$
We wish to determine a recursive form of this estimate which is suitable for easily updating $\theta(i)$ to $\theta(i+1)$. Noting that
$$ R(i) = R(i-1) + \varphi(i)\varphi(i)^T \qquad (27) $$
we see that
$$ R(i)\,\theta(i+1) = \sum_{k=0}^{i} \varphi(k)\, y(k+1) = \sum_{k=0}^{i-1} \varphi(k)\, y(k+1) + \varphi(i)\, y(i+1) = R(i-1)\,\theta(i) + \varphi(i)\, y(i+1) $$
$$ = R(i)\,\theta(i) - \varphi(i)\varphi(i)^T\theta(i) + \varphi(i)\, y(i+1). $$
Hence
$$ \theta(i+1) = \theta(i) + R(i)^{-1}\varphi(i)\,\big[y(i+1) - \varphi(i)^T\theta(i)\big] \qquad (28) $$
$$ R(i) = R(i-1) + \varphi(i)\varphi(i)^T \qquad (29) $$
Thus $\theta(i+1)$ can be computed from the previous values $\theta(i)$ and $R(i)$, and the new observations $y(i+1)$ and $\varphi(i)$, by using (28) and (29).
For a precise implementation of the least squares estimates, one needs to initialize the recursions with the correct least squares estimates at a time for which $R(i)$ is invertible. In practice the matrix $R(0)$ is chosen to be symmetric and positive definite; note that then every $R(i)$ for $i \ge 0$ is also symmetric and positive definite. The estimates provided by the recursions so initialized,
$$ \theta(i+1) = \theta(i) + R(i)^{-1}\varphi(i)\,\big[y(i+1) - \varphi(i)^T\theta(i)\big] \qquad (30) $$
$$ R(i) = R(i-1) + \varphi(i)\varphi(i)^T \qquad (31) $$
are called the recursive least squares (RLS) estimates.
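As a quick numerical check (our own toy experiment, Python/NumPy), the recursions (30)-(31), initialized with a small symmetric positive definite R(0), track the batch estimate (26) closely once enough data has been processed:

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = np.array([0.8, -0.5])
phis = [rng.standard_normal(2) for _ in range(300)]
ys = [theta_true @ p + 0.01 * rng.standard_normal() for p in phis]

# Recursions (30)-(31)
theta = np.zeros(2)
R = 1e-4 * np.eye(2)                  # R(0): symmetric, positive definite
for phi, y in zip(phis, ys):
    R += np.outer(phi, phi)
    theta += np.linalg.solve(R, phi) * (y - phi @ theta)

# Batch estimate (26)
R_batch = sum(np.outer(p, p) for p in phis)
b = sum(p * y for p, y in zip(phis, ys))
theta_batch = np.linalg.solve(R_batch, b)

print(np.round(theta, 4), np.round(theta_batch, 4))  # nearly identical
```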
Part 5:
A: Consider an adaptive system.

Fig.5: The system $y(i+1) = (b_0 + b_1 q^{-1} + \cdots + b_L q^{-L})\, u(i)$, mapping the input u(i) to the output y(i+1).

Fig.6: Adaptive estimator. The estimator processes u(i) and y(i+1), produces the prediction $\hat y(i+1)$, and forms the error $e(i+1)$.
B: For the scalar-parameter case, the least squares estimate satisfies, with $p(i)^{-1} = \sum_{k=0}^{i} u(k)^2$,
$$ \sum_{k=0}^{i} u(k)\, y(k+1) = \hat\theta(i+1)\, p(i)^{-1}, \qquad \sum_{k=0}^{i-1} u(k)\, y(k+1) = \hat\theta(i)\, p(i-1)^{-1}. $$
Subtracting,
$$ y(i+1)\, u(i) = \hat\theta(i+1)\, p(i)^{-1} - \hat\theta(i)\, p(i-1)^{-1}. $$
Since
$$ p(i)^{-1} - p(i-1)^{-1} = u(i)^2, \qquad \text{i.e.,} \qquad p(i-1)^{-1} = p(i)^{-1} - u(i)^2, $$
we get
$$ y(i+1)\, u(i) = \hat\theta(i+1)\, p(i)^{-1} - \hat\theta(i)\, p(i)^{-1} + \hat\theta(i)\, u(i)^2, $$
$$ u(i)\,\big[y(i+1) - \hat\theta(i)\, u(i)\big] = \big[\hat\theta(i+1) - \hat\theta(i)\big]\, p(i)^{-1}. $$
Multiplying both sides by $p(i)$, the RLS algorithm is
$$ \hat\theta(i+1) = \hat\theta(i) + p(i)\, u(i)\,\big[y(i+1) - \hat\theta(i)\, u(i)\big]. $$
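The derived scalar recursion in code (Python/NumPy; the input signal and initial values are our illustrative choices). Note that $p(i)^{-1} = p(i-1)^{-1} + u(i)^2$ is exactly the relation used in the derivation:

```python
import numpy as np

rng = np.random.default_rng(6)
b_true = 2.5                       # true scalar parameter
N = 200
u = rng.standard_normal(N)
y = np.concatenate([[0.0], b_true * u])   # y(i+1) = b_true * u(i)

theta, p_inv = 0.0, 1e-3           # initial estimate and p(0)^{-1} > 0
for i in range(N):
    p_inv += u[i] ** 2             # p(i)^{-1} = p(i-1)^{-1} + u(i)^2
    theta += (u[i] / p_inv) * (y[i + 1] - theta * u[i])   # scalar RLS update
print(round(theta, 4))             # approaches b_true = 2.5
```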
C: By considering the RLS algorithm in the ideal (disturbance-free) case, define the parameter error $\tilde\theta(i) = \hat\theta(i) - \theta_0$. Using $p(i-1)^{-1} = p(i)^{-1} - u(i)^2$ and the update derived above, one finds
$$ p(i)^{-1}\,\tilde\theta(i+1) = p(i-1)^{-1}\,\tilde\theta(i). $$
This has the form $X_{k+1} = X_k$, $k = 0, 1, 2, \ldots$, so the sequence is constant:
$$ p(i)^{-1}\,\tilde\theta(i+1) = p(0)^{-1}\,\tilde\theta(1), \qquad \text{for all } i \ge 0, $$
i.e.,
$$ \tilde\theta(i+1) = p(i)\, p(0)^{-1}\,\tilde\theta(1). $$
Since $p(0)^{-1}\tilde\theta(1)$ is fixed, $\tilde\theta(i) \to 0$ if and only if $p(i) \to 0$.
17