Recursive Least-Squares Algorithm (RLS) : September 30, 2020
Advanced Digital Signal Processing
General Introduction
RLS Algorithm
• The RLS algorithm whitens the input data by using the inverse correlation matrix of the data, which is assumed to be of zero mean.
where λ is a positive constant close to, but less than, unity. When λ = 1, we have the ordinary method of least squares. The inverse of 1 − λ is, roughly speaking, a measure of the memory of the algorithm. The special case λ = 1 corresponds to infinite memory.
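The claim that 1/(1 − λ) measures the memory of the algorithm can be checked directly: the exponential weights λ^(n−i) sum to (1 − λ^n)/(1 − λ), which approaches 1/(1 − λ) as n grows. A small sketch (the value λ = 0.99 below is an illustrative assumption, not from the text):

```python
# Effective memory of exponential weighting: the weights lambda^(n-i)
# decay geometrically, and their sum over a long past approaches
# 1/(1 - lambda), the "memory" of the algorithm.
lam = 0.99        # assumed illustrative forgetting factor
n = 10_000        # number of past samples considered

# Sum of weights lambda^(n-i) for i = 1..n equals (1 - lam**n)/(1 - lam)
weight_sum = sum(lam ** (n - i) for i in range(1, n + 1))
effective_memory = 1.0 / (1.0 - lam)   # = 100 samples for lam = 0.99

print(weight_sum)        # approaches effective_memory as n grows
print(effective_memory)
```

For λ = 0.99 the algorithm therefore behaves roughly like a sliding window of 100 samples, which is why λ trades tracking ability against estimation variance.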
RLS Contd...
The ill-posed nature of the RLS algorithm is due to the following reasons:
To satisfy that objective, we expand the cost function to be
minimized as the sum of two components:
η(n) = Σ_{i=1}^{n} λ^{n−i} |e(i)|² + δ λ^n ||w(n)||²   (7)

Σ_{i=1}^{n} λ^{n−i} |e(i)|² = Σ_{i=1}^{n} λ^{n−i} |d(i) − w^T(n) x(i)|²   (8)
The M × M correlation matrix is reformulated by the addition of the regularization term as
Φ(n) = Σ_{i=1}^{n} λ^{n−i} x(i) x^T(i) + δ λ^n I   (10)
Note that the addition of the regularizing term also has the effect of making the correlation matrix Φ(n) nonsingular at all stages of the computation, starting from n = 0.
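A quick numerical illustration of this point (the dimensions, λ, and δ below are assumed test values, not from the text): with fewer than M samples the data term of (10) is rank deficient and hence singular, while the regularized Φ(n) is always invertible.

```python
import numpy as np

# Assumed test setup: M = 4 taps but only n = 2 samples observed.
rng = np.random.default_rng(0)
M, n, lam, delta = 4, 2, 0.99, 1e-2

X = rng.standard_normal((n, M))   # rows play the role of x(1), x(2)

# Data term of eq. (10): sum of lam^(n-i) * x(i) x(i)^T, rank at most n < M.
Phi_data = sum(lam ** (n - i) * np.outer(X[i - 1], X[i - 1])
               for i in range(1, n + 1))
print(np.linalg.matrix_rank(Phi_data))   # 2: rank-deficient, singular

# Regularized matrix: delta * lam^n * I lifts every eigenvalue above zero.
Phi = Phi_data + delta * lam ** n * np.eye(M)
print(np.linalg.matrix_rank(Phi))        # 4: full rank, invertible
```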
The M-by-1 time-average cross-correlation vector z(n) between the tap inputs of the FIR filter and the desired response is unaffected by the use of regularization, as is shown by the formula

z(n) = Σ_{i=1}^{n} λ^{n−i} x(i) d(i)   (11)
According to the method of least squares, the optimum value of the M-by-1 tap-weight vector ŵ(n), for which the cost function η(n) attains its minimum value, is defined by the normal equations. For the recursive least-squares problem, the normal equations are written in matrix form as
Φ(n) = λ [ Σ_{i=1}^{n−1} λ^{n−1−i} x(i) x^T(i) + δ λ^{n−1} I ] + x(n) x^T(n)
     = λ Φ(n−1) + x(n) x^T(n)   (13)
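The recursion Φ(n) = λ Φ(n−1) + x(n) x^T(n) implied by (13) can be verified numerically against the direct sum in (10); the data below are assumed test values.

```python
import numpy as np

# Assumed test data for checking Phi(n) = lam * Phi(n-1) + x(n) x(n)^T.
rng = np.random.default_rng(1)
M, N, lam, delta = 3, 50, 0.98, 0.1
X = rng.standard_normal((N + 1, M))   # X[i] plays the role of x(i), i = 1..N

def phi(n):
    """Direct evaluation of eq. (10): regularized time-average correlation."""
    acc = delta * lam ** n * np.eye(M)
    for i in range(1, n + 1):
        acc += lam ** (n - i) * np.outer(X[i], X[i])
    return acc

lhs = phi(N)                                     # Phi(N) from the definition
rhs = lam * phi(N - 1) + np.outer(X[N], X[N])    # Phi(N) from the recursion
print(np.allclose(lhs, rhs))                     # True
```

The recursion matters because it replaces an O(n) sum at every step with a single rank-one update.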
The recursion for updating the cross-correlation vector between the tap inputs and the desired response follows in the same way from (11):

z(n) = λ z(n−1) + x(n) d(n)
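The cross-correlation recursion z(n) = λ z(n−1) + x(n) d(n), which follows from the definition z(n) = Σ_{i=1}^{n} λ^{n−i} x(i) d(i), can be checked the same way (test data below are assumptions):

```python
import numpy as np

# Assumed test data for the cross-correlation recursion check.
rng = np.random.default_rng(3)
M, N, lam = 3, 40, 0.97
X = rng.standard_normal((N + 1, M))   # X[i] ~ x(i)
d = rng.standard_normal(N + 1)        # d[i] ~ d(i)

def z(n):
    """Direct evaluation of eq. (11)."""
    return sum(lam ** (n - i) * X[i] * d[i] for i in range(1, n + 1))

# Recursion: z(n) = lam * z(n-1) + x(n) d(n)
print(np.allclose(z(N), lam * z(N - 1) + X[N] * d[N]))   # True
```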
Matrix Inversion Lemma
Let A and B be two positive-definite M × M matrices related by A = B^{−1} + C D^{−1} C^T, where D is a positive-definite matrix and C is a matrix of compatible dimensions. Then

A^{−1} = B − B C (D + C^T B C)^{−1} C^T B
Exponentially Weighted RLS Algorithm
A = Φ(n)
B^{−1} = λ Φ(n − 1)
C = x(n)
D = 1
Substituting all of these into the matrix inversion lemma, we obtain the update for the inverse correlation matrix P(n) = Φ^{−1}(n):

P(n) = λ^{−1} P(n−1) − (λ^{−2} P(n−1) x(n) x^T(n) P(n−1)) / (1 + λ^{−1} x^T(n) P(n−1) x(n))
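With D = 1 and C = x(n), the lemma reduces to a rank-one (Sherman–Morrison) update, which is what makes RLS avoid an explicit matrix inversion at each step. A numerical check with assumed test data:

```python
import numpy as np

# Check the matrix inversion lemma with the substitutions above:
# A = Phi(n) = B^{-1} + C D^{-1} C^T, B^{-1} = lam*Phi(n-1), C = x(n), D = 1.
# Lemma: A^{-1} = B - B C (D + C^T B C)^{-1} C^T B.
# All numerical values here are assumed test data.
rng = np.random.default_rng(2)
M, lam = 3, 0.98
R = rng.standard_normal((M, M))
Phi_prev = R @ R.T + np.eye(M)          # a positive-definite Phi(n-1)
x = rng.standard_normal(M)              # x(n)

A = lam * Phi_prev + np.outer(x, x)     # Phi(n) via the recursion (13)
B = np.linalg.inv(lam * Phi_prev)       # B = (lam * Phi(n-1))^{-1}

# Rank-one form of the lemma: the "matrix inverse" collapses to a scalar divide.
A_inv_lemma = B - (B @ np.outer(x, x) @ B) / (1.0 + x @ B @ x)
print(np.allclose(A_inv_lemma, np.linalg.inv(A)))   # True
```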
Using all the above expressions, we can write the tap-weight update

ŵ(n) = ŵ(n−1) + k(n) ξ(n)

where ξ(n) = d(n) − ŵ^T(n−1) x(n) is the a priori estimation error and k(n) = P(n) x(n) is the gain vector.
RLS Algorithm
To initialize the RLS algorithm, we need to specify two quantities:
• the initial weight vector, ŵ(0) = 0, and
• the initial value of the inverse correlation matrix, P(0) = δ^{−1} I, where δ is the regularization parameter.
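As a sketch under these definitions, a minimal exponentially weighted RLS loop with the standard initialization ŵ(0) = 0 and P(0) = δ^{−1} I might look as follows; the identification task used to exercise it (a known 3-tap FIR filter plus noise) is an assumption for illustration, not from the slides.

```python
import numpy as np

def rls(X, d, lam=0.99, delta=1e-2):
    """Minimal exponentially weighted RLS sketch (textbook form)."""
    N, M = X.shape
    w = np.zeros(M)                      # w(0) = 0
    P = np.eye(M) / delta                # P(0) = delta^{-1} I
    for n in range(N):
        x = X[n]
        pi = P @ x
        k = pi / (lam + x @ pi)          # gain vector k(n)
        e = d[n] - w @ x                 # a priori estimation error
        w = w + k * e                    # tap-weight update
        P = (P - np.outer(k, pi)) / lam  # inverse-correlation update
    return w

# Assumed usage example: identify a known FIR filter from noisy data.
rng = np.random.default_rng(4)
w_true = np.array([0.5, -0.3, 0.2])
X = rng.standard_normal((500, 3))
d = X @ w_true + 0.01 * rng.standard_normal(500)
w_hat = rls(X, d)
print(np.round(w_hat, 2))
```

A small δ is appropriate when the input SNR is high, since P(0) then only briefly dominates the data-driven updates.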