
Recursive Least-Squares Algorithm (RLS)

Adaptive Filter Theory


Advanced Digital Signal Processing

September 30, 2020

General Introduction

• The method of least squares is extended to develop a recursive algorithm for the design of adaptive FIR filters.

• Given the least-squares estimate of the weight vector at iteration n, we may compute the weight vector at iteration n + 1 upon the arrival of a new data set.

• This is known as the recursive least-squares (RLS) algorithm.

RLS Algorithm

• It uses the matrix inversion lemma to simplify the computations.

• An important feature of this algorithm is that its rate of convergence is typically an order of magnitude faster than that of the simple LMS algorithm.

• This is because the RLS algorithm whitens the input data by using the inverse correlation matrix of the data, assumed to be of zero mean.

• This improvement in performance, however, is achieved at the expense of an increase in the computational complexity of the RLS algorithm.

RLS Algorithm

The cost function is formulated using the method of least squares as

$$\eta(n) = \sum_{i=1}^{n} \beta(n, i)\,|e(i)|^{2} \qquad (1)$$

where n is the variable length of the input data and e(i) is the difference between the desired response d(i) and the output y(i) produced by an FIR filter whose tap inputs (at time i) equal x(i), x(i − 1), ..., x(i − M + 1):

$$e(i) = d(i) - y(i) = d(i) - \mathbf{w}^{T}(n)\,\mathbf{x}(i) \qquad (2)$$

Here x(i) is the tap-input vector at time i, defined by

$$\mathbf{x}(i) = [x(i),\; x(i-1),\; \ldots,\; x(i-M+1)]^{T} \qquad (3)$$

and w(n) is the tap-weight vector at time n, defined by

$$\mathbf{w}(n) = [w_{0}(n),\; w_{1}(n),\; \ldots,\; w_{M-1}(n)]^{T} \qquad (4)$$
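To make these definitions concrete, here is a minimal NumPy sketch (real-valued data; the signal names, lengths, and filter order are illustrative assumptions, not from the slides) of the tap-input vector, filter output, and error:

    import numpy as np

    rng = np.random.default_rng(0)
    M = 4                                # number of taps (illustrative)
    x_sig = rng.standard_normal(100)     # input signal x(0), x(1), ...
    d_sig = rng.standard_normal(100)     # desired response (placeholder)
    w = np.zeros(M)                      # tap-weight vector w(n)

    def tap_input(x_sig, i, M):
        """x(i) = [x(i), x(i-1), ..., x(i-M+1)]^T with x(k) = 0 for k < 0."""
        return np.array([x_sig[i - m] if i - m >= 0 else 0.0 for m in range(M)])

    i = 10
    x_i = tap_input(x_sig, i, M)
    y_i = w @ x_i                        # filter output y(i) = w^T(n) x(i)
    e_i = d_sig[i] - y_i                 # error e(i) = d(i) - y(i), Eq. (2)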


RLS Algorithm Contd...

The weighting factor has the property

$$0 < \beta(n, i) \le 1, \qquad i = 1, 2, \ldots, n \qquad (5)$$

The use of the weighting factor β(n, i), in general, is intended to ensure that data in the distant past are forgotten, in order to afford the possibility of following the statistical variations of the observable data. A special form of weighting that is commonly used is the exponential weighting factor, or forgetting factor, defined by

$$\beta(n, i) = \lambda^{\,n-i}, \qquad i = 1, 2, \ldots, n \qquad (6)$$

where λ is a positive constant close to, but less than, unity. When λ = 1, we have the ordinary method of least squares. The inverse of 1 − λ is, roughly speaking, a measure of the memory of the algorithm; the special case λ = 1 corresponds to infinite memory.
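For example, λ = 0.99 gives a memory of roughly 1/(1 − 0.99) = 100 samples, so the algorithm effectively fits only the most recent hundred or so data points, while λ = 0.999 stretches the memory to about 1000 samples.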

RLS Contd...

• The method of RLS solves an ill-posed inverse problem.

• We are given input data consisting of a tap-input vector x(n) and the corresponding desired response d(n) for varying n.

• The requirement is to estimate the unknown parameter vector of a multiple linear regression model that relates d(n) to x(n).

The ill-posed nature of the RLS problem is due to the following reasons:

• There is insufficient information in the input data to reconstruct the input-output mapping uniquely.

• The unavoidable presence of noise or imprecision in the input data adds uncertainty to the reconstructed input-output mapping.

To make the estimation problem well posed, some form of prior information about the input-output mapping is needed. This, in turn, means that the formulation of the cost function must be expanded to take the prior information into account.

To satisfy that objective, we expand the cost function to be minimized as the sum of two components:

$$\eta(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,|e(i)|^{2} + \delta\lambda^{n}\,\|\mathbf{w}(n)\|^{2} \qquad (7)$$

We assume the use of prewindowing, which means that the input signal is windowed before application of the FIR filter. The two components of the cost function are as follows:

• The sum of weighted error squares,

$$\sum_{i=1}^{n} \lambda^{\,n-i}\,|e(i)|^{2} = \sum_{i=1}^{n} \lambda^{\,n-i}\,|d(i) - \mathbf{w}^{T}(n)\,\mathbf{x}(i)|^{2} \qquad (8)$$

which is data dependent. This component measures the exponentially weighted error between the desired response d(i) and the actual response of the filter, y(i), which is related to the tap-input vector x(i) by the formula y(i) = wᵀ(n)x(i).
• The regularizing term,

$$\delta\lambda^{n}\,\|\mathbf{w}(n)\|^{2} = \delta\lambda^{n}\,\mathbf{w}^{T}(n)\,\mathbf{w}(n) \qquad (9)$$

where δ is a positive real number called the regularization parameter. Except for the factor δλⁿ, the regularizing term depends solely on the tap-weight vector w(n). It is included in the cost function to stabilize the solution to the recursive least-squares problem by smoothing the solution; a numerical sketch of the two components follows below.
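As a sanity check, the two components of Eq. (7) can be evaluated directly. The sketch below, with arbitrary illustrative data and a candidate weight vector (all assumptions, not from the slides), computes the weighted error sum of Eq. (8) plus the regularizing term of Eq. (9):

    import numpy as np

    rng = np.random.default_rng(4)
    M, n, lam, delta = 3, 20, 0.95, 0.1
    xs = rng.standard_normal((n, M))   # x(1), ..., x(n) as rows (illustrative)
    ds = rng.standard_normal(n)        # d(1), ..., d(n)
    w = rng.standard_normal(M)         # a candidate tap-weight vector w(n)

    weights = lam ** (n - np.arange(1, n + 1))   # lambda^(n-i) for i = 1..n
    errors = ds - xs @ w                         # e(i) = d(i) - w^T x(i)
    cost = np.sum(weights * errors**2) + delta * lam**n * (w @ w)   # Eq. (7)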

The M × M correlation matrix is reformulated, with the addition of the regularization term, as

$$\boldsymbol{\Phi}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{x}(i)\,\mathbf{x}^{T}(i) + \delta\lambda^{n}\,\mathbf{I} \qquad (10)$$

Note that the addition of the regularizing term also has the effect of making the correlation matrix Φ(n) nonsingular at all stages of the computation, starting from n = 0.
The M × 1 time-average cross-correlation vector z(n) between the tap inputs of the FIR filter and the desired response is unaffected by the use of regularization, as is shown by the formula

$$\mathbf{z}(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,\mathbf{x}(i)\,d(i) \qquad (11)$$

According to the method of least squares, the optimum value of the M × 1 tap-weight vector ŵ(n), for which the cost function η(n) attains its minimum value, is defined by the normal equations. For the recursive least-squares problem, the normal equations are written in matrix form as

$$\boldsymbol{\Phi}(n)\,\hat{\mathbf{w}}(n) = \mathbf{z}(n) \qquad (12)$$

The recursive computation of Φ(n) is formulated as

$$\boldsymbol{\Phi}(n) = \lambda\left[\sum_{i=1}^{n-1} \lambda^{\,n-1-i}\,\mathbf{x}(i)\,\mathbf{x}^{T}(i) + \delta\lambda^{n-1}\,\mathbf{I}\right] + \mathbf{x}(n)\,\mathbf{x}^{T}(n) \qquad (13)$$

$$\boldsymbol{\Phi}(n) = \lambda\,\boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\,\mathbf{x}^{T}(n) \qquad (14)$$

The recursion for updating the cross-correlation vector between the tap inputs and the desired response is

$$\mathbf{z}(n) = \lambda\,\mathbf{z}(n-1) + \mathbf{x}(n)\,d(n) \qquad (15)$$

To compute the least-squares estimate ŵ(n) for the tap-weight vector in accordance with Eq. (12), we would have to determine the inverse of the correlation matrix Φ(n). This explicit inversion is avoided by using the matrix inversion lemma.
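The rank-one recursions of Eqs. (14) and (15) can be checked numerically against the batch definitions of Eqs. (10) and (11). In the sketch below the tap-input vectors are drawn at random rather than built from a scalar signal, an assumption that does not affect the algebra being verified:

    import numpy as np

    rng = np.random.default_rng(1)
    M, lam, delta, n = 3, 0.98, 0.1, 50
    Phi = delta * np.eye(M)              # Phi(0) = delta * I (prewindowed start)
    z = np.zeros(M)
    xs = rng.standard_normal((n, M))     # x(1), ..., x(n) as rows (illustrative)
    ds = rng.standard_normal(n)

    for x, d in zip(xs, ds):
        Phi = lam * Phi + np.outer(x, x)     # Eq. (14)
        z = lam * z + x * d                  # Eq. (15)

    # Batch definitions, Eqs. (10) and (11)
    Phi_batch = sum(lam**(n - i) * np.outer(xs[i - 1], xs[i - 1])
                    for i in range(1, n + 1)) + delta * lam**n * np.eye(M)
    z_batch = sum(lam**(n - i) * xs[i - 1] * ds[i - 1] for i in range(1, n + 1))
    assert np.allclose(Phi, Phi_batch) and np.allclose(z, z_batch)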

Matrix Inversion Lemma

Let A and B be two positive-definite M × M matrices related by

$$\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\,\mathbf{D}^{-1}\,\mathbf{C}^{H} \qquad (16)$$

where C is an M × N matrix and D is a positive-definite N × N matrix. According to the matrix inversion lemma, the inverse of matrix A can be expressed as

$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\,\mathbf{C}\left(\mathbf{D} + \mathbf{C}^{H}\,\mathbf{B}\,\mathbf{C}\right)^{-1}\mathbf{C}^{H}\,\mathbf{B} \qquad (17)$$
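A quick numerical check of the lemma for the real-valued case (so Cᴴ reduces to Cᵀ), with randomly generated positive-definite B and D (illustrative sizes and seeds):

    import numpy as np

    rng = np.random.default_rng(2)
    M, N = 4, 2
    G = rng.standard_normal((M, M)); B = G @ G.T + M * np.eye(M)   # positive definite
    H = rng.standard_normal((N, N)); D = H @ H.T + N * np.eye(N)   # positive definite
    C = rng.standard_normal((M, N))

    A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T              # Eq. (16)
    A_inv = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B   # Eq. (17)
    assert np.allclose(A_inv, np.linalg.inv(A))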

Exponentially Weighted RLS Algorithm

With the identifications

A = Φ(n)
B⁻¹ = λΦ(n − 1)
C = x(n)
D = 1

Eq. (16) reproduces the recursion of Eq. (14). Substituting these into the matrix inversion lemma, we obtain

$$\boldsymbol{\Phi}^{-1}(n) = \lambda^{-1}\,\boldsymbol{\Phi}^{-1}(n-1) - \frac{\lambda^{-2}\,\boldsymbol{\Phi}^{-1}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^{H}(n)\,\boldsymbol{\Phi}^{-1}(n-1)}{1 + \lambda^{-1}\,\mathbf{x}^{H}(n)\,\boldsymbol{\Phi}^{-1}(n-1)\,\mathbf{x}(n)} \qquad (18)$$

For convenience of computation, let

$$\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n) \qquad (19)$$

and define the gain vector

$$\mathbf{k}(n) = \frac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{x}(n)}{1 + \lambda^{-1}\,\mathbf{x}^{H}(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)} \qquad (20)$$

Using the above expressions, we can write

$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{x}^{H}(n)\,\mathbf{P}(n-1) \qquad (21)$$

The M × M matrix P(n) is referred to as the inverse correlation matrix, and the M × 1 vector k(n) is referred to as the gain vector. Rearranging Eq. (20), we get

$$\mathbf{k}(n) = \lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{x}(n) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{x}^{H}(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n) \qquad (22)$$

$$\mathbf{k}(n) = \left[\lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{x}^{H}(n)\,\mathbf{P}(n-1)\right]\mathbf{x}(n) \qquad (23)$$

The bracketed factor is exactly the right-hand side of Eq. (21), so

$$\mathbf{k}(n) = \mathbf{P}(n)\,\mathbf{x}(n) \qquad (24)$$

Substituting P(n) = Φ⁻¹(n), we get

$$\mathbf{k}(n) = \boldsymbol{\Phi}^{-1}(n)\,\mathbf{x}(n) \qquad (25)$$

RLS Algorithm

Initialize the algorithm by setting

$$\hat{\mathbf{w}}(0) = \mathbf{0}, \qquad \mathbf{P}(0) = \delta^{-1}\,\mathbf{I}$$

For each time instant n = 1, 2, ..., compute

$$\mathbf{k}(n) = \frac{\lambda^{-1}\,\mathbf{P}(n-1)\,\mathbf{x}(n)}{1 + \lambda^{-1}\,\mathbf{x}^{H}(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}$$

$$\xi(n) = d(n) - \hat{\mathbf{w}}^{H}(n-1)\,\mathbf{x}(n)$$

$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\,\xi^{*}(n)$$

$$\mathbf{P}(n) = \lambda^{-1}\,\mathbf{P}(n-1) - \lambda^{-1}\,\mathbf{k}(n)\,\mathbf{x}^{H}(n)\,\mathbf{P}(n-1)$$

where ξ(n) is the a priori estimation error and ξ*(n) denotes its complex conjugate (for real-valued data, ξ*(n) = ξ(n)).
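Putting the summary together, here is a minimal NumPy sketch of the exponentially weighted RLS algorithm for real-valued data (so Hermitian transposes reduce to ordinary transposes); the function name and the toy system-identification check are illustrative assumptions, not part of the slides:

    import numpy as np

    def rls(x_sig, d_sig, M, lam=0.99, delta=1e-2):
        """Exponentially weighted RLS for a real-valued FIR filter (sketch)."""
        w = np.zeros(M)                  # w_hat(0) = 0
        P = (1.0 / delta) * np.eye(M)    # P(0) = delta^{-1} I
        for n in range(len(x_sig)):
            # tap-input vector x(n) = [x(n), ..., x(n-M+1)]^T (prewindowed)
            x = np.array([x_sig[n - m] if n - m >= 0 else 0.0 for m in range(M)])
            Px = P @ x
            k = Px / (lam + x @ Px)      # Eq. (20), numerator and denominator scaled by lambda
            xi = d_sig[n] - w @ x        # a priori error xi(n)
            w = w + k * xi               # weight update
            P = (P - np.outer(k, Px)) / lam   # Eq. (21), using the symmetry of P
        return w

    # Toy system identification: recover a known 4-tap FIR filter from noisy data
    rng = np.random.default_rng(3)
    w_true = np.array([0.5, -0.3, 0.2, 0.1])
    x_sig = rng.standard_normal(2000)
    d_sig = np.convolve(x_sig, w_true)[:2000] + 0.01 * rng.standard_normal(2000)
    print(rls(x_sig, d_sig, M=4))        # should be close to w_true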

To initialize the RLS algorithm, we need to specify two quantities:

• The initial weight vector ŵ(0). The customary practice is to set ŵ(0) = 0.

• The initial correlation matrix Φ(0). Setting n = 0 in Eq. (10), with the use of prewindowing, we obtain Φ(0) = δI, where δ is the regularization parameter. The parameter δ should be assigned a small value for a high signal-to-noise ratio (SNR) and a large value for a low SNR.

