
Received: November 28, 2019. Revised: January 27, 2020.


Convergence and Stability Analysis of Spline Adaptive Filtering based on Adaptive Averaging Step-size Normalized Least Mean Square Algorithm

Adisorn Saenmuang 1* and Suchada Sitjongsataporn 2

1 The Electrical Engineering Graduate Program, Faculty of Engineering, Mahanakorn University of Technology, Bangkok, Thailand 10530
2 Department of Electronic Engineering, Mahanakorn Institute of Innovation, Faculty of Engineering, Mahanakorn University of Technology, Bangkok, Thailand 10530
* Corresponding author's Email: [email protected]

Abstract: This paper presents a normalized version of the least mean square algorithm with an adaptive averaging step-size approach for the spline adaptive filter. The adaptive averaging step-size mechanism updates the step-size parameter from the autocorrelation between the previous and present estimate errors of the system. To achieve fast convergence, the proposed spline adaptive filter combines this adaptive averaging step-size scheme with the normalized least mean square approach. The convergence analysis and stability properties are established. Simulation results show that the trajectories of the step-size parameters of the proposed algorithm converge to their own equilibria in spite of large variations in the initial step-size settings. The proposed algorithm demonstrates more robust mean square error performance and faster convergence compared with the conventional spline adaptive filter.

Keywords: Spline adaptive filtering, Nonlinear network, Normalized least mean square algorithm.

1. Introduction

Linear adaptive filtering is widely used because a solution can be determined simply under suitable constraints [1]. In contrast, many practical models require a nonlinear adaptive filter, for which the nonlinear problem has received more attention than the linear operating system [2].

Spline adaptive filtering (SAF) based on the least mean square (LMS) algorithm is a class of nonlinear adaptive filtering introduced in [2-4] with low computational complexity and modelled for nonlinear identification systems [5]. SAF is built from adaptive linear finite impulse response (FIR) filtering followed by an adaptive lookup table (LUT). A nonlinear SAF structure that adjusts the lookup table through its control points has been proposed in [3]. Sandwich SAF models, in the form of cascade SAF architectures, consist of a class of nonlinear models such as linear-nonlinear-linear and nonlinear-linear-nonlinear models based on the SAF structure, which can be optimized using gradient-based conditions in many conventional applications [6, 7].

For nonlinear system identification, the authors in [8-10] applied the normalized version of the LMS (NLMS) scheme with a gradient-based criterion to improve the performance of adaptive filtering, while the authors in [10] extended the approach to the case of infinite impulse response filters.

Against impulsive noise, a set-membership scheme with a normalized least M-estimate algorithm has been developed in [11]; simulation results show that it can attain an achievable convergence rate. In [12], a sign normalized Wiener SAF is proposed to enhance convergence by minimizing the absolute value of the a posteriori error.
International Journal of Intelligent Engineering and Systems, Vol.13, No.2, 2020 DOI: 10.22266/ijies2020.0430.26

Figure 1. Linear-nonlinear network of the AAS-NLMS-SAF structure.

To establish good tracking and fast convergence, the adaptive step-size mechanism based on the LMS algorithm is a well-known and effective approach for achieving convergence in linear adaptive filtering [13], [14]. In [15], the idea of time averaging applied to an adaptive step-size algorithm for beamforming has been modified with low computation. In [16], a low complexity step-size method that adapts an approximation of the autocorrelation between the present and previous estimate errors is rearranged adaptively.

In this paper, we introduce a low complexity adaptive step-size approach based on the normalized LMS algorithm in the SAF structure to achieve fast convergence. In particular, we focus on the convergence analysis and the mean square error performance of the proposed algorithm.

This paper is organized as follows. Section II briefly describes SAF based on LMS. Section III proposes an adaptive averaging step-size algorithm, obtained by minimizing a cost function, for both the weight vector of the adaptive linear FIR filter and the interpolating control points of the adaptive LUT. Section IV presents the convergence and stability analysis of the proposed algorithm. Experimental results and the conclusion are given in Sections V and VI, respectively.

The following notation is used throughout this paper. The operator (·)^T denotes transposition. Matrices and vectors are in bold uppercase and bold lowercase, respectively.

2. Spline adaptive filtering

The structure of the spline adaptive filter (SAF), namely the linear-nonlinear network, is shown in Fig. 1. The network consists of a linear part, an adaptive finite impulse response (FIR) filter, and a nonlinear part, an adaptive lookup table (LUT) with a spline interpolation network [2].

Consider a desired signal d_n as

    d_n = y_n + e_n ,    (1)

where y_n is the spline adaptive filtering (SAF) output and e_n is the system error.

The output of the adaptive FIR filter s_n can be defined as

    s_n = w_n^T x_n ,    (2)

where w_n is the adaptive tap-weight vector and x_n is the input vector:

    w_n = [w_0 w_1 … w_{N−1}]^T ,
    x_n = [x_n x_{n−1} … x_{n−N+1}]^T .

Following [17], the output of the SAF is defined as

    y_n = u_n^T C q_{i,n} ,    (3)

    u_n = [u_n^3, u_n^2, u_n, 1]^T ,    (4)

where q_{i,n} is the control points vector


    q_{i,n} = [q_{i,n} q_{i+1,n} q_{i+2,n} q_{i+3,n}]^T .

The local parameter u_n and the span index i are defined as [18]

    u_n = s_n/Δx − ⌊s_n/Δx⌋ ,  i = ⌊s_n/Δx⌋ + (Q−1)/2 ,    (5)

where Δx is the uniform space between two adjacent control points, Q is the number of control points, and ⌊·⌋ is the floor operator. The parameter s_n enters a nonlinear activation function through the span index i and the local parameter u, where u ∈ [0, 1]. The spline basis matrix C is described in [2].

By minimizing the cost function of the least mean square (LMS) algorithm, we have [2]

    J(w_n, q_{i,n}) = (1/2) min_{w_n} { |e_n^2| } ,    (6)

where e_n is the a priori estimation error that arises from the model as

    e_n = d_n − y_n = d_n − u_n^T C q_{i,n} .    (7)

Hence, the adaptive tap-weight vector w_n and the control points vector q_{i,n} take the form

    w_{n+1} = w_n − μ_w ∂J(w_n, q_{i,n})/∂w_n ,    (8)

    q_{i,n+1} = q_{i,n} − μ_q ∂J(w_n, q_{i,n})/∂q_{i,n} ,    (9)

where μ_w and μ_q are the step-size parameters. The gradient of the cost function in Eq. (6) is evaluated with respect to (w.r.t.) w_n and q_{i,n} using the chain rule as

    ∂J(w_n, q_{i,n})/∂w_n = −e_n (∂y_n/∂u_n)(∂u_n/∂s_n)(∂s_n/∂w_n)
                          = −(e_n/Δx) u'_n C q_{i,n} x_n ,    (10)

where the derivative of u_n is given as

    u'_n = [3u_n^2, 2u_n, 1, 0] ,    (11)

and

    ∂J(w_n, q_{i,n})/∂q_{i,n} = −e_n (∂y_n/∂u_n)(∂u_n/∂s_n)(∂s_n/∂q_{i,n}) = −e_n C^T u_n .    (12)

According to Eqs. (10) and (12), the tap-weight vector w_n and the control points vector q_{i,n} of LMS in recursive form can be represented as [2]

    w_{n+1} = w_n + μ_w u'_n C q_{i,n} x_n e_n ,    (13)

    q_{i,n+1} = q_{i,n} + μ_q C^T u_n e_n ,    (14)

where μ_w and μ_q are the fixed step-size parameters for the tap-weight vector w_n and the control points vector q_{i,n}, which absorb the other constants.

3. Proposed adaptive averaging step-size normalized least mean square algorithm for spline adaptive filtering

Following [11], the minimized cost function of the normalized least mean square algorithm for SAF is expressed as

    J̃(w_n, q_{i,n}) = (1/2) min_{w_n} { (u_n^T u_n)^{−1} |e_n^2| } ,    (15)

where e_n is defined in Eq. (7).

The update of the estimated tap-weight vector w_n at symbol n can be expressed by

    w_{n+1} = w_n − μ_{w_n} ∂J̃(w_n, q_{i,n})/∂w_n .    (16)

Differentiating the cost function in Eq. (15) w.r.t. w_n with the chain rule gives

    ∂J̃(w_n, q_{i,n})/∂w_n = (u_n^T u_n)^{−1} { −e_n (∂y_n/∂u_n)(∂u_n/∂s_n)(∂s_n/∂w_n) }
                           = (u_n^T u_n)^{−1} { −(e_n/Δx) u'_n C q_{i,n} x_n } .    (17)

Finally, we introduce the proposed estimated tap-weight vector w_n of the adaptive FIR filter based on the normalized least mean square algorithm as

    ∴ w_{n+1} = w_n + μ_{w_n} (u'_n C q_{i,n} x_n e_n) / (Δx · u_n^T u_n) ,    (18)

where μ_{w_n} is the adaptive step-size parameter for the learning rate of the linear part of the SAF structure.

Similarly, the update of the estimated control points vector q_{i,n} at symbol n can be obtained by

    q_{i,n+1} = q_{i,n} − μ_{q_n} ∂J̃(w_n, q_{i,n})/∂q_{i,n} .    (19)
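The forward pass of Eqs. (2)-(5) can be sketched as follows. This is a minimal illustration assuming Δx = 0.2 and the Catmull-Rom basis C of [2]; the function and variable names are ours, and the 23-point ramp LUT is only a toy initialization:

```python
import numpy as np

# Catmull-Rom spline basis matrix C, as described in [2]
C = 0.5 * np.array([[-1.0,  3.0, -3.0,  1.0],
                    [ 2.0, -5.0,  4.0, -1.0],
                    [-1.0,  0.0,  1.0,  0.0],
                    [ 0.0,  2.0,  0.0,  0.0]])

def saf_forward(w, x_buf, q, dx=0.2):
    """SAF forward pass: FIR part, Eq. (2), then the spline LUT, Eqs. (3)-(5).

    w     : tap-weight vector (length N)
    x_buf : input vector [x_n, x_{n-1}, ..., x_{n-N+1}]
    q     : full control-point table (length Q)
    dx    : uniform spacing between adjacent control points
    """
    s = w @ x_buf                                 # Eq. (2): FIR output s_n
    Q = len(q)
    i = int(np.floor(s / dx)) + (Q - 1) // 2      # Eq. (5): span index i
    u = s / dx - np.floor(s / dx)                 # Eq. (5): local parameter, u in [0, 1)
    i = min(max(i, 0), Q - 4)                     # clamp so q[i:i+4] stays inside the table
    u_vec = np.array([u**3, u**2, u, 1.0])        # Eq. (4)
    y = u_vec @ C @ q[i:i + 4]                    # Eq. (3): spline output y_n
    return y, s, u_vec, i

# toy usage: 5-tap filter, linearly increasing 23-point LUT
q0 = np.linspace(-2.2, 2.2, 23)
w0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y, s, u_vec, i = saf_forward(w0, np.array([0.3, 0.1, -0.2, 0.0, 0.05]), q0)
# here s = 0.3, u = 0.5, i = 12, and the spline evaluates to y = 0.5
```

Note that the rows of u_vec @ C always sum to one, so with a linear ramp of control points the spline reproduces a straight line, which is a convenient sanity check for the LUT indexing.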

The gradient of the cost function in Eq. (15) w.r.t. q_{i,n} using the chain rule is defined by

    ∂J̃(w_n, q_{i,n})/∂q_{i,n} = (u_n^T u_n)^{−1} { −e_n (∂y_n/∂u_n)(∂u_n/∂s_n)(∂s_n/∂q_{i,n}) }
                              = (u_n^T u_n)^{−1} { −e_n C^T u_n } .    (20)

Therefore, we present the control points vector q_{i,n} based on the normalized least mean square algorithm for the nonlinear network in the adaptive lookup table as

    ∴ q_{i,n+1} = q_{i,n} + μ_{q_n} (C^T u_n e_n) / (u_n^T u_n) ,    (21)

where μ_{q_n} is the adaptive step-size parameter for the nonlinear part of the SAF structure.

3.1 Adaptive averaging step-size algorithm for spline adaptive filtering

The objective of the adaptive averaging step-size mechanism is as follows. Following [17], if the estimate error is far from the optimal value, the step-size parameter is increased; when the estimate error is near the optimum, the step-size parameter is decreased automatically.

The proposed idea is to average the step-size parameter with the autocorrelation between the previous and present estimate errors of the network system in order to update μ_{w_n} and μ_{q_n} adaptively.

Therefore, we modify the adaptive averaging step-size μ_{w_n} of the tap-weight vector w_n using an estimate of the averaged autocorrelation {e_{n−1} e_n} as

    μ_{w_n} = α_w · μ_{w_{n−1}} + β_w · |ξ_n|^2 ,    (22)

    ξ_n = γ · ξ_{n−1} + (1 − γ) {e_{n−1} e_n} ,    (23)

where β_w is a scaling variable for the prediction error, γ is close to 1 and 0 < α_w < 1.

We note two reasons for using ξ_n. First, the autocorrelation of the error is generally a good measure for optimal performance. Second, an uncorrelated noise sequence is rejected by the step-size update mechanism.

3.2 Modified adaptive step-size algorithm

Following [19], the learning rate of the step-size is controlled by the squared estimate error. If the error is large, the step-size parameter increases, while a small error, which would otherwise yield misadjustment, decreases the step-size value. Therefore, the step-size parameter μ_{q_n} of the control points vector q_{i,n} is

    μ_{q_n} = α_q · μ_{q_{n−1}} + β_q · |e_n|^2 ,    (24)

where 0 < α_q < 1, β_q > 0 and the a priori estimate error e_n is given in Eq. (7).

A summary of the proposed adaptive averaging step-size mechanism based on the normalized version of the least mean square algorithm for the spline adaptive filter (AAS-NLMS-SAF) is shown in Table 1.

4. Convergence and stability analysis

In order to achieve optimal performance, we determine an adaptive learning rate that minimizes the instantaneous output error of the filter by performing a Taylor series expansion of the error e_n. The approach leads to the optimal learning rate that ensures convergence at steady-state.

4.1 Convergence analysis of proposed algorithm

The convergence properties of the adaptive tap-weight vector w_n can be determined by using the Taylor series expansion of the estimate error e_n as [2]

    e_{n+1} ≃ e_n + (∂e_n/∂w_n) · Δw_n ,    (25)

where the estimate error e_n is given as

    e_n = d_n − u_n^T C q_{i,n} .    (26)

Differentiating e_n in Eq. (26) w.r.t. w_n with the chain rule, we get

    ∂e_n/∂w_n = −(u'_n C q_{i,n} x_n) / (Δx (u_n^T u_n)) ,    (27)

where u'_n is given in Eq. (11).

From Eq. (18), we have the change of w_n as

    Δw_n = w_{n+1} − w_n = μ_{w_n} (u'_n C q_{i,n} x_n e_n) / (Δx (u_n^T u_n)) .    (28)

By substituting Eqs. (27) and (28) into Eq. (25), we arrive at

    e_{n+1} = e_n − (μ_{w_n} ∅_n x_n / (Δx (u_n^T u_n)))^T (∅_n x_n e_n / (Δx (u_n^T u_n))) ,    (29)

where ∅_n is given by

    ∅_n = u'_n C q_{i,n} .    (30)
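The complete per-sample recursion of Section 3 — the normalized updates of Eqs. (18) and (21) together with the step-size rules of Eqs. (22)-(24) — can be sketched as below. This is a toy illustration: the helper names, the identification task and the 23-point ramp LUT are our assumptions, while α_w, β_w, α_q, β_q and γ take the values used in Section 5.

```python
import numpy as np

def aas_nlms_saf_step(w, q, x_buf, d, state, dx=0.2,
                      alpha_w=0.975, beta_w=2.95e-3,
                      alpha_q=0.975, beta_q=1.95e-3, gamma=0.97):
    """One AAS-NLMS-SAF update; state carries (mu_w, mu_q, xi, e_prev)."""
    C = 0.5 * np.array([[-1, 3, -3, 1], [2, -5, 4, -1],
                        [-1, 0, 1, 0], [0, 2, 0, 0]], dtype=float)
    mu_w, mu_q, xi, e_prev = state
    Q = len(q)

    s = w @ x_buf                                    # Eq. (2)
    i = int(np.floor(s / dx)) + (Q - 1) // 2         # Eq. (5)
    u = s / dx - np.floor(s / dx)
    i = min(max(i, 0), Q - 4)                        # keep the 4-point span in the table
    u_vec = np.array([u**3, u**2, u, 1.0])           # Eq. (4)
    du_vec = np.array([3 * u**2, 2 * u, 1.0, 0.0])   # Eq. (11)

    e = d - u_vec @ C @ q[i:i + 4]                   # Eq. (7)
    norm = u_vec @ u_vec                             # normalization term u_n^T u_n

    # adaptive averaging step-sizes, Eqs. (22)-(24)
    xi = gamma * xi + (1.0 - gamma) * (e_prev * e)   # Eq. (23)
    mu_w = alpha_w * mu_w + beta_w * xi**2           # Eq. (22)
    mu_q = alpha_q * mu_q + beta_q * e**2            # Eq. (24)

    # normalized updates, Eqs. (18) and (21)
    w = w + mu_w * (du_vec @ C @ q[i:i + 4]) * x_buf * e / (dx * norm)
    q = q.copy()
    q[i:i + 4] += mu_q * (C.T @ u_vec) * e / norm

    return w, q, e, (mu_w, mu_q, xi, e)

# toy identification of a Wiener system (FIR filter followed by tanh)
rng = np.random.default_rng(0)
w_true = np.array([0.6, -0.3, 0.1, 0.05, -0.02])
w_est = np.array([1e-3, 0.0, 0.0, 0.0, 0.0])
q_est = np.linspace(-2.2, 2.2, 23)
state = (1.5e-4, 1.5e-4, 0.0, 0.0)                   # mu_w(0), mu_q(0), xi, e_prev
x = rng.standard_normal(600)
for n in range(5, 600):
    x_buf = x[n:n - 5:-1]                            # [x_n, ..., x_{n-4}]
    d = np.tanh(w_true @ x_buf) + 1e-2 * rng.standard_normal()
    w_est, q_est, e, state = aas_nlms_saf_step(w_est, q_est, x_buf, d, state)
```

Only the LUT segment q[i:i+4] visited at each sample is updated, which is what keeps the per-sample cost of the SAF low compared with a full nonlinear model.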

Table 1. Proposed spline adaptive filtering based on the adaptive averaging step-size normalized least mean square algorithm (AAS-NLMS-SAF)

Initialize: w(0) = φ_w · [1 0 … 0]^T , q(0) = [1 0 … 0]^T ,

    C = (1/2) [ −1  3 −3  1
                 2 −5  4 −1
                −1  0  1  0
                 0  2  0  0 ]

for n = 0, 1, 2, …, N−1:
  1) Calculate the output of the adaptive FIR filter s_n:
         s_n = w_n^T x_n .
  2) Compute the local parameter u_n and index i:
         u_n = s_n/Δx − ⌊s_n/Δx⌋ ,  i = ⌊s_n/Δx⌋ + (Q−1)/2 .
  3) Calculate the error e_n:
         e_n = d_n − u_n^T C q_{i,n} .
  4) Compute the adaptive averaging step-size μ_{w_n} of w_n:
         μ_{w_n} = α_w · μ_{w_{n−1}} + β_w · |ξ_n|^2 ,
         ξ_n = γ · ξ_{n−1} + (1 − γ) {e_{n−1} e_n} .
  5) Calculate the modified step-size μ_{q_n} of q_{i,n}:
         μ_{q_n} = α_q · μ_{q_{n−1}} + β_q · |e_n|^2 .
  6) Determine the tap-weight vector w_n and the control points vector q_{i,n}:
         w_{n+1} = w_n + μ_{w_n} (u'_n C q_{i,n} x_n e_n) / (Δx u_n^T u_n) ,
         q_{i,n+1} = q_{i,n} + μ_{q_n} (C^T u_n e_n) / (u_n^T u_n) .
end

Therefore, the estimate error can be rewritten as

    ∴ e_{n+1} = [1 − (μ_{w_n}/(Δx)^2) (∅_n^2 ‖x_n‖^2 / (u_n^T u_n)^2)] e_n .    (31)

Taking the norm of both sides of Eq. (31), we have

    |e_{n+1}| = |1 − (μ_{w_n}/(Δx)^2) (∅_n^2 ‖x_n‖^2 / (u_n^T u_n)^2)| · |e_n| .    (32)

Therefore, the proposed step-size μ_{w_n} of the tap-weight vector w_n in the adaptive FIR filter reaches

    ∴ μ_{w_n} ≃ 2 (u_n^T u_n)^2 (Δx)^2 / (∅_n^2 ‖x_n‖^2) ,    (33)

where we assume that |e_{n+1}| < |e_n|.

Similarly, we determine a bound on μ_{q_n} with the Taylor series expansion of the estimate error e_n as

    e_{n+1} = e_n + (∂e_n/∂q_{i,n}) · Δq_{i,n} ,    (34)

where the derivative of e_n w.r.t. q_{i,n} is given by

    ∂e_n/∂q_{i,n} = −C^T u_n / (u_n^T u_n) ,    (35)

and from Eq. (21) we have the change of q_{i,n} as

    Δq_{i,n} = μ_{q_n} (C^T u_n e_n) / (u_n^T u_n) .    (36)

Hence, substituting Eqs. (35) and (36) into Eq. (34), we have

    ∴ e_{n+1} = [1 − (μ_{q_n} C^T u_n / (u_n^T u_n))^T (C^T u_n / (u_n^T u_n))] e_n .    (37)

Taking the norm of both sides of Eq. (37), we get

    |e_{n+1}| = |1 − μ_{q_n} ‖C^T u_n‖^2 / (u_n^T u_n)^2| · |e_n| .    (38)

Therefore, the adaptive learning rate μ_{q_n} becomes

    ∴ μ_{q_n} ≅ 2 (u_n^T u_n)^2 / ‖C^T u_n‖^2 .    (39)

4.2 Mean square error performance of proposed algorithm

In this section, we consider the mean square error performance at steady-state through the derivation of the excess mean square error (EMSE) of the nonlinear adaptive FIR filter and of the control points vector in the adaptive LUT.

Following [6], we denote by ε_n the a priori error of the system, by ε_{w_n} the a priori error concerned with the tap-weight vector w_n, and by ε_{q_n} the a priori error involved with the control points vector q_{i,n}.

To support the analysis, the proposed adaptive averaging step-size normalized least

mean square (AAS-NLMS) algorithm is derived under a few assumptions.

Assumption 1: The noise sequence of the system η_n is independent and identically distributed with noise variance δ^2 and zero mean.

Assumption 2: The noise sequence of the system η_n is independent of x_n, s_n, ε_n, ε_{w_n} and ε_{q_n}.

Let us define the estimated weight noise vector η_{w_n} concerned with the tap-weight vector w_n as

    η_{w_n} = w_0 − w_n ,    (40)

where η_{w_n} = [η_{w_0} η_{w_1} … η_{w_{N−1}}].

From Eq. (18), we can write the update of the weight noise vector η_{w_{n+1}} as

    η_{w_{n+1}} = η_{w_n} − (w_{n+1} − w_n)
                = η_{w_n} − μ_{w_n} ∅_n x_n e_n / (Δx (u_n^T u_n)) ,    (41)

where ∅_n is given in Eq. (30).

Evaluating the squared norm ‖η_{w_n}‖^2 of the update in Eq. (41), we obtain

    ‖η_{w_{n+1}}‖^2 = ‖η_{w_n}‖^2 − 2 η_{w_n}^T (μ_{w_n} ∅_n x_n e_n / (Δx (u_n^T u_n)))
                      + μ_{w_n}^2 ∅_n^2 ‖x_n‖^2 e_n^2 / ((Δx)^2 (u_n^T u_n)^2) .    (42)

Assumption 3: We consider the condition necessary for convergence in the mean, that is

    E{‖η_{w_{n+1}}‖^2} = E{‖η_{w_n}‖^2} , as n → ∞ .

From Assumption 3, the update of η_{w_n} in Eq. (42) can be rewritten as

    2 η_{w_n}^T (μ_{w_n}/Δx) (∅_n x_n e_n / (u_n^T u_n)) = (μ_{w_n}^2/(Δx)^2) (∅_n^2 ‖x_n‖^2 e_n^2 / (u_n^T u_n)^2)

    2 ε_{w_n} e_n = (μ_{w_n}/Δx) (∅_n ‖x_n‖^2 e_n^2 / (u_n^T u_n)) ,    (43)

where ε_{w_n} is given by

    ε_{w_n} = η_{w_n}^T x_n .    (44)

The a priori error of the system e_n can be redefined as

    e_n = ε_{w_n} + η_{w_n} .    (45)

Taking the expectation over the noise in Eqs. (44) and (45) under the steady-state condition as n approaches infinity, we get

    E{ε_{w_n} e_n} = E{ε_{w_n} (ε_{w_n} + η_{w_n})} ≃ E{ε_{w_n}^2} ,    (46)

and

    E{e_n^2} = E{(ε_{w_n} + η_{w_n})^2}
             = E{ε_{w_n}^2 + 2 ε_{w_n} η_{w_n} + η_{w_n}^2}
             ≃ E{ε_{w_n}^2 + ξ_{w_n}^2} ,    (47)

where ξ_{w_n}^2 is the minimum MSE involved with w_n.

Substituting Eqs. (46) and (47) into Eq. (43), we have

    2 E{ε_{w_n}^2} = (μ_{w_n}/Δx) (∅_n ‖x_n‖^2 / (u_n^T u_n)) E{ε_{w_n}^2 + ξ_{w_n}^2}

    [2 − (μ_{w_n}/Δx) (∅_n ‖x_n‖^2 / (u_n^T u_n))] E{ε_{w_n}^2} = (μ_{w_n}/Δx) (∅_n ‖x_n‖^2 / (u_n^T u_n)) E{ξ_{w_n}^2}

    E{ε_{w_n}^2} = μ_{w_n} ∅_n ‖x_n‖^2 E{ξ_{w_n}^2} / (2 Δx (u_n^T u_n) − μ_{w_n} ∅_n ‖x_n‖^2) .    (48)

If μ_{w_n} is very small, we have

    ∴ ζ_w = E{ε_{w_n}^2} ≅ μ_{w_n} ∅_n ‖x_n‖^2 E{ξ_{w_n}^2} / (2 Δx (u_n^T u_n)) ,    (49)

where ζ_w is the excess MSE concerned with w_n.

In a similar manner, we define the estimated weight noise vector η_{q_n} involved with the control points vector q_{i,n} as

    η_{q_n} = q_0 − q_{i,n} ,    (50)

where η_{q_n} = [η_{q_0} η_{q_1} … η_{q_{N−1}}].

From Eq. (21), the update of the weight noise vector η_{q_n} can be expressed as

    η_{q_{n+1}} = η_{q_n} − μ_{q_n} C^T u_n e_n / (u_n^T u_n) .    (51)

Then, evaluating the squared norm ‖η_{q_n}‖^2 using Eq. (51), we have

    ‖η_{q_{n+1}}‖^2 = ‖η_{q_n}‖^2 − 2 η_{q_n}^T μ_{q_n} (C^T u_n e_n / (u_n^T u_n))
                      + μ_{q_n}^2 ‖C^T u_n‖^2 e_n^2 / (u_n^T u_n)^2 .    (52)

Assumption 4: We regard that

    E{‖η_{q_{n+1}}‖^2} = E{‖η_{q_n}‖^2} , as n → ∞ .

From Assumption 4, the update of η_{q_n} in Eq. (52) can be calculated as

    2 η_{q_n}^T μ_{q_n} (C^T u_n e_n / (u_n^T u_n)) = μ_{q_n}^2 ‖C^T u_n‖^2 e_n^2 / (u_n^T u_n)^2

    2 ε_{q_n} e_n = μ_{q_n} ‖C^T u_n‖^2 e_n^2 / (u_n^T u_n) ,    (53)

where ε_{q_n} is given as

    ε_{q_n} = η_{q_n}^T C^T u_n .    (54)

So the a priori error e_n involved with q_{i,n} is

    e_n = ε_{q_n} + η_{q_n} .    (55)

Taking the expectation over the noise in Eqs. (53) and (55) at steady-state for n → ∞, we have

    E{ε_{q_n} e_n} = E{ε_{q_n} (ε_{q_n} + η_{q_n})} ≃ E{ε_{q_n}^2} ,    (56)

    E{e_n^2} = E{(ε_{q_n} + η_{q_n})^2} ≃ E{ε_{q_n}^2 + ξ_{q_n}^2} ,    (57)

where ξ_{q_n}^2 is the minimum MSE involved with q_{i,n}. Substituting Eqs. (56) and (57) into Eq. (53), we get

    2 E{ε_{q_n}^2} = μ_{q_n} (‖C^T u_n‖^2 / (u_n^T u_n)) E{ε_{q_n}^2 + ξ_{q_n}^2}

    E{ε_{q_n}^2} [2 − μ_{q_n} ‖C^T u_n‖^2 / (u_n^T u_n)] = μ_{q_n} ‖C^T u_n‖^2 E{ξ_{q_n}^2} / (u_n^T u_n) .    (58)

If μ_{q_n} is very small, we get

    ∴ ζ_q ≃ E{ε_{q_n}^2} = μ_{q_n} ‖C^T u_n‖^2 E{ξ_{q_n}^2} / (2 (u_n^T u_n)) ,    (59)

where ζ_q is the excess MSE concerned with q_{i,n}.

5. Experimental results

In this section, we provide experimental tests of system identification by simulating a random process. The input coloured signal for all experiments comprises 5,000 samples generated for the system identification over 100 Monte Carlo trials, following [20]:

    x_n = α · x_{n−1} + √(1 − α^2) · ψ_n ,    (60)

where ψ_n denotes a zero mean white Gaussian noise with unit variance and 0.1 ≤ α < 0.99.
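The coloured input of Eq. (60) is a first-order autoregressive process; a minimal sketch follows, where the sample count matches Section 5 while the seed and the α value are illustrative:

```python
import numpy as np

def coloured_input(n_samples, alpha, rng):
    """Generate x_n = alpha*x_{n-1} + sqrt(1 - alpha^2)*psi_n, as in Eq. (60).

    The scaling sqrt(1 - alpha^2) keeps the process at (asymptotically)
    unit variance, so alpha only controls how coloured the input is.
    """
    psi = rng.standard_normal(n_samples)   # zero-mean, unit-variance white noise
    x = np.empty(n_samples)
    x[0] = psi[0]
    for n in range(1, n_samples):
        x[n] = alpha * x[n - 1] + np.sqrt(1.0 - alpha**2) * psi[n]
    return x

x = coloured_input(5000, alpha=0.95, rng=np.random.default_rng(1))
```

A strongly coloured input (α close to 1) slows the convergence of gradient-type updates, which is why both α = 0.10 and α = 0.95 are tested in the experiments.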

Figure 2. Learning curves of μ_w(n) of the tap-weight vector w_n of the proposed AAS-NLMS-SAF algorithm with different α = 0.1, 0.25, 0.75 and SNR = 40 dB
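The equilibrium behaviour in Fig. 2 follows from the contraction 0 < α_w < 1 in Eq. (22): if the error statistic |ξ_n|² settles near a constant c, the recursion has the unique fixed point β_w·c/(1 − α_w), independent of μ_w(0). A toy check, where the constant c is purely an illustrative assumption:

```python
# mu_n = alpha_w*mu_{n-1} + beta_w*c is a contraction for 0 < alpha_w < 1,
# so trajectories from very different initial step-sizes meet at the same
# fixed point beta_w*c/(1 - alpha_w), mirroring the equilibria in Fig. 2.
alpha_w, beta_w, c = 0.975, 2.95e-3, 0.04    # c stands in for a steady |xi_n|^2

def run(mu0, steps=2000):
    mu = mu0
    for _ in range(steps):
        mu = alpha_w * mu + beta_w * c       # Eq. (22) with |xi_n|^2 = c
    return mu

# widely different initial step-sizes (values from Section 5)
mu_small, mu_large = run(1.5e-4), run(5.5e-2)
fixed_point = beta_w * c / (1.0 - alpha_w)
# mu_small and mu_large both end within numerical precision of fixed_point
```

The deviation from the fixed point shrinks by the factor α_w per step, so after a few hundred samples the initial choice of μ_w(0) is forgotten.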


Figure 3. Learning curves of μ_q(n) of the control points vector q_{i,n} of the proposed AAS-NLMS-SAF algorithm with different α = 0.1, 0.25, 0.75 and SNR = 40 dB

Figure 4. Mean square error (MSE) of the proposed AAS-NLMS-SAF algorithm compared with LMS-SAF [20] for different initial step-size parameters, using SNR = 40 dB and α = 0.10


Figure 5. Mean square error (MSE) in dB of the proposed AAS-NLMS-SAF algorithm compared with LMS-SAF [20] for different initial step-size parameters, using SNR = 40 dB and α = 0.95

We consider the mean square error (MSE) computed in dB as

    MSE_n = 10 log ( E{ (d_n − u_n^T C q_{i,n})^2 } ) .    (61)

A 23-point LUT q_0 is implemented for a nonlinear memoryless target function that is interpolated with a uniform third-degree spline; the SAF model uses Δx = 0.2 [4], and C is a Catmull-Rom spline basis as described in [2].

The initial parameters of all SAF models are as follows: w(0) = φ_w · [1, 0, …, 0]^T, where φ_w = 1 × 10^{−3}, q(0) = [1, 0, …, 0]^T, SNR = 40 dB, and the filter length is 5. The initial parameters for spline adaptive filtering based on the least mean square algorithm (LMS-SAF) [20] are μ_w = μ_q = 0.025, 0.035, 0.050. A summary of LMS-SAF is shown in Table 2.

Table 2. Spline adaptive filter based on the least mean square algorithm (LMS-SAF) [20]

Initialize: w(0) = q(0) = φ_w · [1 0 … 0]^T
for n = 0, 1, 2, …, N−1:
  1) Determine the tap-weight vector w_n:
         w_{n+1} = w_n + μ_w u'_n C q_{i,n} x_n e_n .
  2) Determine the control points vector q_{i,n}:
         q_{i,n+1} = q_{i,n} + μ_q C^T u_n e_n .
end

The other parameters for the proposed AAS-NLMS-SAF algorithm are μ_w(0) = μ_q(0) = 1.5 × 10^{−4}, 1.5 × 10^{−2}, 2.5 × 10^{−2}, 3.5 × 10^{−2}, 5.5 × 10^{−2}. The fixed parameters are α_w = α_q = 0.975, β_w = 2.95 × 10^{−3}, β_q = 1.95 × 10^{−3} and γ = 0.97.

The learning curves of the step-size parameters μ_{w_n} of the tap-weight vector and μ_{q_n} of the control points vector of the proposed AAS-NLMS-SAF algorithm are shown in Figs. 2 and 3 for the different initial parameters μ_w(0), μ_q(0) at SNR = 40 dB, with different values of α in Eq. (60) generating the input coloured signal. It is seen that both learning curves of μ_{w_n} and μ_{q_n} converge to their equilibria at steady-state despite 100-fold differences in the initial step-size settings.

In terms of MSE performance, simulation results for the proposed experiments with the two choices of the parameter α = 0.10, 0.95 are presented in Fig. 4 and Fig. 5, respectively. At steady-state, the performance of the proposed AAS-NLMS-SAF algorithm is close to the noise power. In

addition, we notice that the proposed AAS-NLMS-SAF algorithm converges faster and with a more robust mechanism when compared with the LMS-SAF algorithm using the variants of the fixed step-size parameter.

6. Conclusion

In this paper, we propose a step-size approach in terms of the averaging of the squared error for spline adaptive filtering (AAS-NLMS-SAF). We describe how to derive the proposed adaptive averaging step-size algorithm with the normalized version of the LMS algorithm on spline adaptive filtering. Using an estimate of the autocorrelation between the present estimated error and the a priori estimated error, the adaptive averaging step-size scheme is proposed on SAF. The convergence and stability analysis of the proposed AAS-NLMS-SAF algorithm is examined in terms of the mean square error and the excess mean square error concerned with the adaptive tap-weight FIR vector and the control points vector in the adaptive LUT.

Both trajectories of the adaptive step-size parameters converge to their equilibria in spite of 100-fold initial variations. Learning curves of the MSE performance are illustrated to converge rapidly to steady-state in comparison with the existing LMS-SAF algorithm using fixed step-size parameters.

In particular, SAF can perform well with low-cost complexity beside the existing FIR structures. Because of its recursive form, SAF can be adapted to many practical cases such as nonlinear channel equalization, biomedical data analysis and control applications.

References
[1] A. Uncini, "Fundamentals of Adaptive Signal Processing", ser. Signals and Communication Technology, Springer International Publishing, Switzerland, 2015.
[2] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, "Nonlinear Spline Adaptive Filtering", Signal Processing, Vol. 93, Issue 4, pp. 772-783, 2013.
[3] C. Liu, Z. Zhang, and X. Tang, "Sign Normalised Spline Adaptive Filtering Algorithms Against Impulsive Noise", Signal Processing, Vol. 148, pp. 234-240, 2018.
[4] L. Ljung, "System Identification: Theory for the User", Upper Saddle River, NJ, 1999.
[5] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, "Hammerstein Uniform Cubic Spline Adaptive Filtering: Learning and Convergence Properties", Signal Processing, Vol. 100, pp. 112-123, 2014.
[6] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, "Novel Cascade Spline Architectures for the Identification of Nonlinear Systems", IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 62, No. 7, pp. 1825-1835, 2015.
[7] S. Scardapane, M. Scarpiniti, D. Comminiello, and A. Uncini, "Diffusion Spline Adaptive Filtering", In: Proc. of European Signal Processing Conference, pp. 1498-1502, 2016.
[8] S. Guan and Z. Li, "Normalised Spline Adaptive Filtering Algorithm for Nonlinear System Identification", Neural Processing Letters, Vol. 5, pp. 1-13, 2017.
[9] S. Guan and Z. Li, "Normalised Spline Adaptive Filtering Algorithm for Nonlinear System Identification", Neural Processing Letters, Vol. 46, Issue 2, pp. 595-607, 2017.
[10] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, "Nonlinear System Identification using IIR Spline Adaptive Filters", Signal Processing, Vol. 108, pp. 30-35, 2015.
[11] C. Liu and Z. Zhang, "Set-membership Normalised Least M-estimate Spline Adaptive Filtering Algorithm in Impulsive Noise", Electronics Letters, Vol. 54, No. 6, pp. 393-395, 2018.
[12] C. Liu, Z. Zhang, and X. Tang, "Sign Normalised Spline Adaptive Filtering Algorithms Against Impulsive Noise", Signal Processing, Vol. 148, pp. 234-240, 2018.
[13] H. S. Lee, S. E. Kim, W. Lee, and W. J. Song, "A Variable Step-size Diffusion LMS Algorithm for Distributed Estimation", IEEE Transactions on Signal Processing, Vol. 63, No. 7, pp. 1808-1820, 2015.
[14] S. Sitjongsataporn, "Advanced Adaptive DMT Equalisation: Algorithms and Implementation", LAP LAMBERT Academic Publishing, 2011.
[15] L. Wang, Y. Cai, and R. C. de Lamare, "Low-Complexity Adaptive Step-Size Constrained Constant Modulus SG-based Algorithms for Blind Adaptive Beamforming", In: Proc. of International Conference on Acoustics, Speech, and Signal Processing, pp. 2593-2596, 2008.
[16] S. Sitjongsataporn and P. Yuvapoositanon, "Low Complexity Adaptive Step-Size Filtered Gradient-based Per-Tone DMT Equalisation", In: Proc. of International Symposium on Circuits and Systems, pp. 2526-2529, 2010.
[17] S. Kalluri and G. R. Arce, "General Class of Nonlinear Normalized Adaptive Filtering Algorithms", IEEE Transactions on Signal Processing, Vol. 48, No. 8, pp. 2262-2272, 1999.
[18] S. Guarnieri, F. Piazza, and A. Uncini, "Multilayer Feedforward Networks with Adaptive Spline Activation Function", IEEE Transactions on Neural Networks, Vol. 10, No. 3, pp. 672-683, 1999.
[19] S. Sitjongsataporn, "Analysis of Low Complexity Adaptive Step-size Orthogonal Gradient-based FEQ for OFDM Systems", ECTI Transactions on Computer and Information Technology, Vol. 5, No. 2, pp. 134-145, 2011.
[20] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, "Spline Adaptive Filters: Theory and Applications", Adaptive Learning Methods for Nonlinear System Modelling, pp. 47-69, 2018.
