Convergence and Stability Analysis of Spline Adaptive Filtering Based On Adaptive Averaging Step-Size Normalized Least Mean Square Algorithm
1 The Electrical Engineering Graduate Program, Faculty of Engineering, Mahanakorn University of Technology, Bangkok, Thailand 10530
2 Department of Electronic Engineering, Mahanakorn Institute of Innovation, Faculty of Engineering, Mahanakorn University of Technology, Bangkok, Thailand 10530
* Corresponding author’s Email: [email protected]
Abstract: This paper presents a normalized least mean square (NLMS) algorithm with an adaptive averaging step-size approach for the spline adaptive filter (SAF). The adaptive averaging step-size mechanism updates the step-size parameter using the autocorrelation between the previous and present estimate errors of the system. To achieve fast convergence, the proposed spline adaptive filter combines the adaptive averaging step-size scheme with the normalized least mean square approach. The convergence analysis and stability properties are established. Simulation results show that the trajectories of the step-size parameters of the proposed algorithm converge to their own equilibria in spite of large variations in the initial step-size settings. The proposed algorithm demonstrates more robust mean square error performance and faster convergence compared with the conventional spline adaptive filter.
Keywords: Spline adaptive filtering, Nonlinear network, Normalized least mean square algorithm.
International Journal of Intelligent Engineering and Systems, Vol.13, No.2, 2020 DOI: 10.22266/ijies2020.0430.26
Received: November 28, 2019. Revised: January 27, 2020.
𝐪_{i,n} = [q_{i,n}  q_{i+1,n}  q_{i+2,n}  q_{i+3,n}]^T. The local parameter u_n and the span index i are defined as [18]

    u_n = s_n/Δx − ⌊s_n/Δx⌋ ,   i = ⌊s_n/Δx⌋ + (Q − 1)/2 .   (5)

The gradient of the cost function in Eq. (6) is then evaluated with respect to (w.r.t.) the adaptive tap-weight vector 𝐰_n and the control-point vector 𝐪_{i,n} using the chain rule:

    ∂J(𝐰_n, 𝐪_{i,n})/∂𝐰_n = −e_n (∂y_n/∂𝐮_n)(∂𝐮_n/∂𝐬_n)(∂𝐬_n/∂𝐰_n)
                           = −e_n 𝐮′_n 𝐂 𝐪_{i,n} 𝐱_n ,   (10)

where the derivative of 𝐮_n is given as

    𝐮′_n = [3u_n², 2u_n, 1, 0] ,   (11)

and

    ∂J(𝐰_n, 𝐪_{i,n})/∂𝐪_{i,n} = −e_n 𝐂^T 𝐮_n .   (12)

According to Eqs. (10) and (12), the LMS recursions of the tap-weight 𝐰_n and control-point 𝐪_{i,n} vectors can be represented as [2]

    𝐰_{n+1} = 𝐰_n + μ_w 𝐮′_n 𝐂 𝐪_{i,n} 𝐱_n e_n ,   (13)
    𝐪_{i,n+1} = 𝐪_{i,n} + μ_q 𝐂^T 𝐮_n e_n ,   (14)

where μ_w and μ_q are the step-size parameters.

For the normalized version, the gradient of the normalized cost function J̃ w.r.t. 𝐰_n is

    ∂J̃(𝐰_n, 𝐪_{i,n})/∂𝐰_n = (𝐮_n^T 𝐮_n)^{−1} {−e_n (∂y_n/∂𝐮_n)(∂𝐮_n/∂𝐬_n)(∂𝐬_n/∂𝐰_n)}
                            = (𝐮_n^T 𝐮_n)^{−1} {−(e_n/Δx) 𝐮′_n 𝐂 𝐪_{i,n} 𝐱_n} .   (17)

Finally, we introduce the proposed tap-weight estimate vector 𝐰_n of the adaptive FIR filter based on the normalized least mean square algorithm:

    ∴ 𝐰_{n+1} = 𝐰_n + μ_{w_n} 𝐮′_n 𝐂 𝐪_{i,n} 𝐱_n e_n / (Δx · 𝐮_n^T 𝐮_n) ,   (18)

where μ_{w_n} is the adaptive step-size parameter governing the learning rate of the linear part of the SAF structure.

Similarly, the estimated control-point vector 𝐪_{i,n} at symbol n is updated as

    𝐪_{i,n+1} = 𝐪_{i,n} − μ_{q_n} ∂J̃(𝐰_n, 𝐪_{i,n})/∂𝐪_{i,n} .   (19)
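To make the forward pass concrete, the spline output y_n = 𝐮_n^T 𝐂 𝐪_{i,n} together with the linear FIR stage and the index arithmetic of Eq. (5) can be sketched in Python. The function name, the index clamping at the table edges, and the linear test LUT are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Catmull-Rom spline basis matrix C, as used in the SAF literature [2]
C = 0.5 * np.array([[-1.0,  3.0, -3.0,  1.0],
                    [ 2.0, -5.0,  4.0, -1.0],
                    [-1.0,  0.0,  1.0,  0.0],
                    [ 0.0,  2.0,  0.0,  0.0]])

def saf_forward(w, x, q, dx):
    """One forward pass of the spline adaptive filter.

    w  : FIR tap-weight vector (linear part)
    x  : input regressor vector x_n
    q  : lookup-table control points (length Q)
    dx : uniform knot spacing (Delta x)
    Returns y_n plus the intermediate quantities reused by the updates.
    """
    Q = len(q)
    s = w @ x                                   # s_n = w_n^T x_n
    # Local parameter u_n and span index i, Eq. (5)
    u_loc = s / dx - np.floor(s / dx)
    i = int(np.floor(s / dx)) + (Q - 1) // 2
    i = min(max(i, 0), Q - 4)                   # keep the 4-point span inside the LUT
    u = np.array([u_loc**3, u_loc**2, u_loc, 1.0])   # spline regressor u_n
    qi = q[i:i + 4]                             # q_{i,n} = [q_i .. q_{i+3}]^T
    y = u @ C @ qi                              # y_n = u_n^T C q_{i,n}
    return y, s, u, u_loc, i, qi
```

With a linear lookup table, Catmull-Rom interpolation reproduces the linear stage exactly, which is a convenient sanity check for the index and local-parameter arithmetic of Eq. (5).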
Hence, the gradient of the cost function in Eq. (15) w.r.t. 𝐪_{i,n} using the chain rule is defined by

    ∂J̃(𝐰_n, 𝐪_{i,n})/∂𝐪_{i,n} = (𝐮_n^T 𝐮_n)^{−1} {−e_n (∂y_n/∂𝐮_n)(∂𝐮_n/∂𝐬_n)(∂𝐬_n/∂𝐪_{i,n})}
                              = (𝐮_n^T 𝐮_n)^{−1} {−e_n 𝐂^T 𝐮_n} .   (20)

Therefore, we present the control-point vector 𝐪_{i,n} update based on the normalized least mean square algorithm for the nonlinear network in the adaptive lookup table as

    ∴ 𝐪_{i,n+1} = 𝐪_{i,n} + μ_{q_n} 𝐂^T 𝐮_n e_n / (𝐮_n^T 𝐮_n) ,   (21)

where μ_{q_n} is the adaptive step-size parameter for the nonlinear part of the SAF structure.

3.1 Adaptive averaging step-size algorithm for spline adaptive filtering

The adaptive averaging step-size mechanism improves the updates as follows. Following [17], if the estimate error is far from the optimal value, the step-size parameter is increased; when the estimate error is near the optimum, the step-size parameter is decreased automatically.

The proposed idea is to average the step-size parameter with the autocorrelation of the previous and present estimate errors of the network system, so that μ_{w_n} and μ_{q_n} are updated adaptively.

Therefore, we modify the adaptive averaging step-size μ_{w_n} of the tap-weight vector 𝐰_n using an averaged estimate of the autocorrelation e*_{n−1} e_n as

    μ_{w_n} = α_w μ_{w_{n−1}} + β_w |ξ_n|² ,   (22)
    ξ_n = γ ξ_{n−1} + (1 − γ) e*_{n−1} e_n ,   (23)

where 0 < γ < 1 is a smoothing factor. Similarly, a large error increases the step size, while a small error reduces misadjustment through a decreased step-size value. Therefore, the step-size parameter μ_{q_n} of the control-point vector 𝐪_{i,n} is

    μ_{q_n} = α_q μ_{q_{n−1}} + β_q |e_n|² ,   (24)

where 0 < α_q < 1, β_q > 0, and the a priori estimate error e_n is given in Eq. (7).

A summary of the proposed adaptive averaging step-size mechanism based on the normalized version of the least mean square algorithm for the spline adaptive filter (AAS-NLMS-SAF) is shown in Table 1.

4. Convergence and stability analysis

In order to achieve optimal performance, we determine an adaptive learning rate that minimizes the instantaneous output error of the filter by performing a Taylor series expansion of the error e_n. The approach aims at the optimal learning rate that ensures convergence at steady-state.

4.1 Convergence analysis of proposed algorithm

The convergence properties of the adaptive tap-weight vector 𝐰_n can be determined using a Taylor series expansion of the estimate error e_n as [2]

    e_{n+1} ≃ e_n + (∂e_n/∂𝐰_n) · Δ𝐰_n ,   (25)

where the estimate error e_n is given as

    e_n = d_n − 𝐮_n^T 𝐂 𝐪_{i,n} .   (26)

Differentiating e_n in Eq. (26) w.r.t. 𝐰_n with the chain rule, substituting the weight update of Eq. (18), and enforcing |e_{n+1}| < |e_n| leads to the step-size bound in Eq. (33).
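The averaging recursions of Eqs. (22)-(24) can be sketched as a small state update. The parameter defaults are the values used in the paper's experimental section; packaging the three recursions in a single function is our assumption:

```python
def update_step_sizes(mu_w, mu_q, xi, e_prev, e_now,
                      alpha_w=0.975, beta_w=2.95e-3,
                      alpha_q=0.975, beta_q=1.95e-3, gamma=0.97):
    """Adaptive averaging step-size updates, Eqs. (22)-(24).

    xi averages the error autocorrelation e_{n-1} e_n with factor gamma;
    mu_w grows with |xi|^2 and mu_q with |e_n|^2, while the geometric
    factors alpha_w, alpha_q pull both step sizes down near convergence.
    """
    xi = gamma * xi + (1.0 - gamma) * e_prev * e_now        # Eq. (23)
    mu_w = alpha_w * mu_w + beta_w * abs(xi) ** 2           # Eq. (22)
    mu_q = alpha_q * mu_q + beta_q * abs(e_now) ** 2        # Eq. (24)
    return mu_w, mu_q, xi
```

When the error is large and correlated across iterations, |ξ_n|² is large and μ_{w_n} increases; near convergence the errors shrink and decorrelate, so both step sizes decay geometrically.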
Table 1. Summary of the proposed AAS-NLMS-SAF algorithm

    𝐂 = (1/2) [ −1   3  −3   1
                 2  −5   4  −1
                −1   0   1   0
                 0   2   0   0 ]

for n = 0, 1, 2, …, N − 1:
1) Calculate the output of the adaptive FIR filter: s_n = 𝐰_n^T 𝐱_n .
2) Compute the local parameter u_n and index i: u_n = s_n/Δx − ⌊s_n/Δx⌋ , i = ⌊s_n/Δx⌋ + (Q − 1)/2 .
3) Calculate the error: e_n = d_n − 𝐮_n^T 𝐂 𝐪_{i,n} .
4) Compute the adaptive averaging step-size μ_{w_n} of 𝐰_n: ξ_n = γ ξ_{n−1} + (1 − γ) e*_{n−1} e_n , μ_{w_n} = α_w μ_{w_{n−1}} + β_w |ξ_n|² .
5) Compute the modified step-size μ_{q_n} of 𝐪_{i,n}: μ_{q_n} = α_q μ_{q_{n−1}} + β_q |e_n|² .
6) Update the tap-weight vector: 𝐰_{n+1} = 𝐰_n + μ_{w_n} 𝐮′_n 𝐂 𝐪_{i,n} 𝐱_n e_n / (Δx 𝐮_n^T 𝐮_n) .
7) Update the control-point vector: 𝐪_{i,n+1} = 𝐪_{i,n} + μ_{q_n} 𝐂^T 𝐮_n e_n / (𝐮_n^T 𝐮_n) .
end

The resulting bound on the step size of the linear part is

    ∴ μ_{w_n} ≃ 2 (𝐮_n^T 𝐮_n)(Δx)² / (∅_n² ‖𝐱_n‖²) ,   (33)

where we assume that |e_{n+1}| < |e_n|.

Similarly, we determine a bound on μ_{q_n} with the Taylor series expansion of the estimate error e_n as

    e_{n+1} = e_n + (∂e_n/∂𝐪_{i,n}) · Δ𝐪_{i,n} ,   (34)

where the derivative of e_n w.r.t. 𝐪_{i,n} is given by

    ∂e_n/∂𝐪_{i,n} = −𝐂^T 𝐮_n / (𝐮_n^T 𝐮_n) ,   (35)

and from Eq. (21) we have the change of 𝐪_{i,n} as

    Δ𝐪_{i,n} = μ_{q_n} 𝐂^T 𝐮_n e_n / (𝐮_n^T 𝐮_n) .   (36)

Hence, substituting Eqs. (35) and (36) into Eq. (34), we have

    ∴ e_{n+1} = [1 − μ_{q_n} ‖𝐂^T 𝐮_n‖² / (𝐮_n^T 𝐮_n)²] e_n .   (37)

Taking the norm of both sides in Eq. (37),

    |e_{n+1}| = |1 − μ_{q_n} ‖𝐂^T 𝐮_n‖² / (𝐮_n^T 𝐮_n)²| · |e_n| ,   (38)

and requiring |e_{n+1}| < |e_n|, with ‖𝐂^T 𝐮_n‖² ≤ ‖𝐂^T 𝐂‖ (𝐮_n^T 𝐮_n), we get

    ∴ μ_{q_n} ≅ 2 (𝐮_n^T 𝐮_n) / ‖𝐂^T 𝐂‖ .   (39)

4.2 Mean square error performance of proposed algorithm
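The two stability bounds can be sketched as a clipping rule. The paper only derives the bounds of Eqs. (33) and (39); applying them as explicit clipping of the adaptive step sizes is an assumption here, and the scalar spline slope term ∅_n is passed in as an input:

```python
import numpy as np

# Catmull-Rom basis, as in Table 1
C = 0.5 * np.array([[-1.0,  3.0, -3.0,  1.0],
                    [ 2.0, -5.0,  4.0, -1.0],
                    [-1.0,  0.0,  1.0,  0.0],
                    [ 0.0,  2.0,  0.0,  0.0]])

def stable_step_sizes(mu_w, mu_q, u, x, phi, dx):
    """Clip the adaptive step sizes to the stability bounds.

    u   : spline regressor u_n, x : input regressor x_n
    phi : scalar spline slope term (the circled symbol in the text)
    dx  : knot spacing Delta x
    """
    uu = float(u @ u)
    mu_w_max = 2.0 * uu * dx**2 / (phi**2 * float(x @ x))   # Eq. (33)
    mu_q_max = 2.0 * uu / np.linalg.norm(C.T @ C, 2)        # Eq. (39)
    return min(mu_w, mu_w_max), min(mu_q, mu_q_max)
```

Step sizes already inside the stable region pass through unchanged; larger values are pulled back to the bound, which keeps the error contraction factor of Eq. (37) inside the unit interval.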
The mean square error performance analysis of the proposed adaptive averaging step-size normalized least mean square (AAS-NLMS) algorithm is carried out under a few assumptions.

Assumption 1: The noise sequence of the system η_n is independent and identically distributed with noise variance δ² and zero mean.

Assumption 2: The noise sequence of the system η_n is independent of 𝐱_n, 𝐬_n, ε_n, ε_{w_n}, and ε_{q_n}.

Let us define the estimate weight noise vector 𝜼_{w_n} associated with the tap-weight vector 𝐰_n as

    𝜼_{w_n} = 𝐰_0 − 𝐰_n ,   (40)

where 𝜼_{w_n} = [η_{w_0} η_{w_1} … η_{w_{N−1}}]. From Eq. (18), we can write the update of the weight noise vector 𝜼_{w_{n+1}} as

    𝜼_{w_{n+1}} = 𝜼_{w_n} − (𝐰_{n+1} − 𝐰_n)
                = 𝜼_{w_n} − μ_{w_n} ∅_n 𝐱_n e_n / (Δx (𝐮_n^T 𝐮_n)) .   (41)

Taking the expectation over the noise in Eqs. (44) and (45) at steady-state, as n approaches infinity, we get

    E{ℰ_{w_n} e_n} = E{ℰ_{w_n}(ℰ_{w_n} + η_{w_n})} ≃ E{ℰ²_{w_n}}   (46)

and

    E{e_n²} = E{(ℰ_{w_n} + η_{w_n})²}
            = E{ℰ²_{w_n} + 2 ℰ_{w_n} η_{w_n} + η²_{w_n}}
            ≃ E{ℰ²_{w_n} + ξ²_{w_n}} ,   (47)

where ξ²_{w_n} is the minimum MSE involved with 𝐰_n. Substituting Eqs. (46) and (47) into Eq. (43), we have

    2 E{ℰ²_{w_n}} = (μ_{w_n} ∅_n ‖𝐱_n‖² / (Δx (𝐮_n^T 𝐮_n))) · E{ℰ²_{w_n} + ξ²_{w_n}} ,

    [2 − μ_{w_n} ∅_n ‖𝐱_n‖² / (Δx (𝐮_n^T 𝐮_n))] E{ℰ²_{w_n}} = (μ_{w_n} ∅_n ‖𝐱_n‖² / (Δx (𝐮_n^T 𝐮_n))) · E{ξ²_{w_n}} ,

    ∴ E{ℰ²_{w_n}} = μ_{w_n} ∅_n ‖𝐱_n‖² E{ξ²_{w_n}} / (2 Δx (𝐮_n^T 𝐮_n) − μ_{w_n} ∅_n ‖𝐱_n‖²) .   (48)
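The closed form of Eq. (48) can be evaluated numerically to see how the steady-state excess error scales with the step size. All parameter values below are illustrative, not taken from the paper's experiments:

```python
def excess_mse_w(mu_w, phi, x_energy, dx, uu, xi2_min):
    """Steady-state E{eps_w^2} from Eq. (48).

    mu_w     : step size (treated as its steady-state value)
    phi      : scalar spline slope term
    x_energy : ||x_n||^2
    dx       : knot spacing Delta x
    uu       : u_n^T u_n
    xi2_min  : minimum MSE E{xi_w^2}
    """
    num = mu_w * phi * x_energy * xi2_min
    den = 2.0 * dx * uu - mu_w * phi * x_energy
    assert den > 0, "step size outside the stable region"
    return num / den
```

In the small-step regime μ_{w_n} ∅_n ‖𝐱_n‖² ≪ 2Δx 𝐮_n^T 𝐮_n, the denominator is nearly constant, so the excess MSE scales almost linearly with the step size: halving μ_{w_n} roughly halves the steady-state excess error.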
From Assumption (4), the update of 𝜼_{q_n} in Eq. (52) can be calculated at steady-state as

    2 𝜼_{q_n}^T μ_{q_n} 𝐂^T 𝐮_n e_n / (𝐮_n^T 𝐮_n) = μ²_{q_n} ‖𝐮_n^T 𝐂‖² e_n² / (𝐮_n^T 𝐮_n)² ,

which simplifies to

    2 ℰ_{q_n} e_n = μ_{q_n} ‖𝐮_n^T 𝐂‖² e_n² / (𝐮_n^T 𝐮_n) ,   (53)

where ℰ_{q_n} is given as

    ℰ_{q_n} = 𝜼_{q_n}^T 𝐂^T 𝐮_n .   (54)

So, we determine that the a priori error e_n involved with 𝐪_{i,n} is

    e_n = ℰ_{q_n} + η_{q_n} .   (55)

Taking the expectation over the noise in Eqs. (53) and (55) at steady-state for n → ∞, we have

    E{ℰ_{q_n} e_n} = E{ℰ_{q_n}(ℰ_{q_n} + η_{q_n})} ≃ E{ℰ²_{q_n}}   (56)

and

    E{e_n²} = E{(ℰ_{q_n} + η_{q_n})²} ≃ E{ℰ²_{q_n} + ξ²_{q_n}} ,   (57)

where ξ²_{q_n} is the minimum MSE involved with 𝐪_{i,n}. Replacing Eqs. (56) and (57) into Eq. (53), we get

    2 E{ℰ²_{q_n}} = μ_{q_n} (‖𝐮_n^T 𝐂‖² / (𝐮_n^T 𝐮_n)) · E{ℰ²_{q_n} + ξ²_{q_n}} ,

    E{ℰ²_{q_n}} [2 − μ_{q_n} ‖𝐮_n^T 𝐂‖² / (𝐮_n^T 𝐮_n)] = μ_{q_n} ‖𝐮_n^T 𝐂‖² E{ξ²_{q_n}} / (𝐮_n^T 𝐮_n) .   (58)

If μ_{q_n} is very small, we get

    ∴ ζ_q ≃ E{ℰ²_{q_n}} = μ_{q_n} ‖𝐮_n^T 𝐂‖² E{ξ²_{q_n}} / (2 (𝐮_n^T 𝐮_n)) ,   (59)

where ζ_q is the excess MSE concerned with 𝐪_{i,n}.

5. Experimental results

In this section, we provide experimental tests on system identification by simulating a random process. The input coloured signal for all experiments comprises 5,000 samples generated over 100 Monte Carlo trials, following [20]:

    x_n = α x_{n−1} + √(1 − α²) ψ_n ,   (60)

where ψ_n denotes zero-mean white Gaussian noise with unit variance and 0.1 ≤ α < 0.99.
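The coloured input of Eq. (60) is a unit-variance AR(1) process; a sketch (the function name and the seeding convention are ours):

```python
import numpy as np

def coloured_input(n_samples, alpha, rng=None):
    """AR(1) coloured input of Eq. (60):
    x_n = alpha * x_{n-1} + sqrt(1 - alpha^2) * psi_n,
    with psi_n zero-mean, unit-variance white Gaussian noise."""
    rng = np.random.default_rng(rng)
    psi = rng.standard_normal(n_samples)
    x = np.empty(n_samples)
    x[0] = psi[0]
    for n in range(1, n_samples):
        x[n] = alpha * x[n - 1] + np.sqrt(1.0 - alpha**2) * psi[n]
    return x
```

The √(1 − α²) factor keeps the process at unit variance regardless of α, so varying α changes only the colouring (the lag-1 correlation), not the signal power.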
Figure 2. Learning curves of μ_w(n) of the tap-weight vector 𝐰_n for the proposed AAS-NLMS-SAF algorithm with different α = 0.1, 0.25, 0.75 and SNR = 40 dB
Figure 3. Learning curves of μ_q(n) of the control-point vector 𝐪_{i,n} for the proposed AAS-NLMS-SAF algorithm with different α = 0.1, 0.25, 0.75 and SNR = 40 dB
Figure 4. Mean square error (MSE) of the proposed AAS-NLMS-SAF algorithm compared with LMS-SAF [20] for different initial step-size parameters, with SNR = 40 dB and α = 0.10
Figure 5. Mean square error (MSE) in dB of the proposed AAS-NLMS-SAF algorithm compared with LMS-SAF [20] for different initial step-size parameters, with SNR = 40 dB and α = 0.95
We consider the mean square error (MSE) computed in dB as

    MSE_n = 10 log₁₀ ( E{(d_n − 𝐮_n^T 𝐂 𝐪_{i,n})²} ) .   (61)

A 23-point LUT 𝐪_0 is implemented for a nonlinear memoryless target function that is interpolated with a uniform third-degree spline; the SAF model uses Δx = 0.2 [4], and 𝐂 is the Catmull-Rom spline matrix described in [2].

Initial parameters of all SAF models are as follows: 𝐰(0) = φ_w [1, 0, …, 0]^T with φ_w = 1 × 10⁻³, 𝐪(0) = [1, 0, …, 0]^T, SNR = 40 dB, and a filter length of 5. The initial parameters for the spline adaptive filter based on the least mean square (LMS-SAF) algorithm [20] are μ_w = μ_q = 0.025, 0.035, 0.050. A summary of LMS-SAF is shown in Table 2.

Table 2. Spline adaptive filter based on the least mean square algorithm (LMS-SAF) [20]

Initialize: 𝐰(0) = 𝐪(0) = φ_w [1 0 … 0]^T
for n = 0, 1, 2, …, N − 1:
1) Determine the tap-weight vector 𝐰_n: 𝐰_{n+1} = 𝐰_n + μ_w 𝐮′_n 𝐂 𝐪_{i,n} 𝐱_n e_n .
2) Determine the control-point vector 𝐪_{i,n}: 𝐪_{i,n+1} = 𝐪_{i,n} + μ_q 𝐂^T 𝐮_n e_n .
end
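A sketch of the ensemble MSE of Eq. (61): the expectation is approximated by averaging the squared instantaneous errors over the Monte Carlo runs before conversion to dB. The (runs × samples) error-matrix layout is an assumption:

```python
import numpy as np

def mse_db(err_runs):
    """MSE_n in dB, Eq. (61): err_runs has shape (runs, samples),
    one row of instantaneous errors d_n - u_n^T C q_{i,n} per trial."""
    return 10.0 * np.log10(np.mean(err_runs**2, axis=0))
```

With 100 independent trials, each MSE_n point is an ensemble average, so the dB curve is smooth enough to read off the convergence rate and the steady-state floor.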
Other parameters for the proposed AAS-NLMS-SAF algorithm are: μ_w(0) = μ_q(0) = 1.5 × 10⁻⁴, 1.5 × 10⁻², 2.5 × 10⁻², 3.5 × 10⁻², 5.5 × 10⁻². The fixed parameters are as follows: α_w = α_q = 0.975, β_w = 2.95 × 10⁻³, β_q = 1.95 × 10⁻³, γ = 0.97.

Learning curves of the step-size parameters μ_{w_n} of the tap-weight vector and μ_{q_n} of the control-point vector of the proposed AAS-NLMS-SAF algorithm are shown in Figs. 2 and 3 for the different initial values μ_w(0), μ_q(0) at SNR = 40 dB, with the different α in Eq. (60) generating the input coloured signal. Both learning curves of μ_{w_n} and μ_{q_n} converge to their equilibria at steady-state despite the 100-fold spread of initial step-size settings.

In terms of MSE performance, simulation results for the two choices of parameter α = 0.10 and 0.95 are presented in Figs. 4 and 5, respectively. At steady-state, the performance of the proposed AAS-NLMS-SAF algorithm approaches the noise power. In addition, the proposed AAS-NLMS-SAF algorithm converges faster and more robustly than the LMS-SAF algorithm with its variants of fixed step-size parameter.

6. Conclusion

In this paper, we propose a step-size approach based on averaging of the squared error for spline adaptive filtering (AAS-NLMS-SAF). We describe how to derive the proposed adaptive averaging step-size algorithm with the normalized version of the LMS algorithm on spline adaptive filtering. Using an estimate of the autocorrelation between the present estimated error and the a priori estimated error, the adaptive averaging step-size scheme is applied to the SAF. The convergence and stability of the proposed AAS-NLMS-SAF algorithm are examined in terms of the mean square error and the excess mean square error associated with the adaptive tap-weight FIR vector and the control-point vector in the adaptive LUT.

The trajectories of both adaptive step-size parameters converge to their equilibria in spite of 100-fold variations in the initial values. The MSE learning curves converge markedly faster to steady-state in comparison with the existing LMS-SAF algorithm using fixed step-size parameters.

Moreover, the SAF performs well at low computational cost compared with existing FIR structures. Because of its recursive form, the SAF can be adapted to many practical cases such as nonlinear channel equalization, biomedical data analysis, and control applications.

References
[1] A. Uncini, Fundamentals of Adaptive Signal Processing, ser. Signals and Communication Technology, Springer International Publishing, Switzerland, 2015.
[2] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, “Nonlinear Spline Adaptive Filtering”, Signal Processing, Vol. 93, Issue 4, pp. 772-783, 2013.
[3] C. Liu, Z. Zhang, and X. Tang, “Sign Normalised Spline Adaptive Filtering Algorithms Against Impulsive Noise”, Signal Processing, Vol. 148, Issue 6, pp. 234-240, 2018.
[4] L. Ljung, System Identification: Theory for the User, Upper Saddle River, NJ, 1999.
[5] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, “Hammerstein Uniform Cubic Spline Adaptive Filtering: Learning and Convergence Properties”, Signal Processing, Vol. 100, pp. 112-123, 2014.
[6] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, “Novel Cascade Spline Architectures for the Identification of Nonlinear Systems”, IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 62, No. 7, pp. 1825-1835, 2015.
[7] S. Scardapane, M. Scarpiniti, D. Comminiello, and A. Uncini, “Diffusion Spline Adaptive Filtering”, In: Proc. of European Signal Processing Conference, pp. 1498-1502, 2016.
[8] S. Guan and Z. Li, “Normalised Spline Adaptive Filtering Algorithm for Nonlinear System Identification”, Neural Processing Letters, Vol. 5, pp. 1-13, 2017.
[9] S. Guan and Z. Li, “Normalised Spline Adaptive Filtering Algorithm for Nonlinear System Identification”, Neural Processing Letters, Vol. 46, Issue 2, pp. 595-607, 2017.
[10] M. Scarpiniti, D. Comminiello, R. Parisi, and A. Uncini, “Nonlinear System Identification using IIR Spline Adaptive Filters”, Signal Processing, Vol. 108, pp. 30-35, 2015.
[11] C. Liu and Z. Zhang, “Set-membership Normalised Least M-estimate Spline Adaptive Filtering Algorithm in Impulsive Noise”, Electronics Letters, Vol. 54, No. 6, pp. 393-395, 2018.
[12] C. Liu, Z. Zhang, and X. Tang, “Sign Normalised Spline Adaptive Filtering Algorithms Against Impulsive Noise”, Signal Processing, Vol. 148, pp. 234-240, 2018.
[13] H.S. Lee, S.E. Kim, W. Lee, and W.J. Song, “A Variable Step-size Diffusion LMS Algorithm for Distributed Estimation”, IEEE Transactions on Signal Processing, Vol. 63, No. 7, pp. 1808-1820, 2015.
[14] S. Sitjongsataporn, Advanced Adaptive DMT Equalisation: Algorithms and Implementation, LAP LAMBERT Academic Publishing, 2011.
[15] L. Wang, Y. Cai, and R.C. de Lamare, “Low-Complexity Adaptive Step-Size Constrained Constant Modulus SG-based Algorithms for Blind Adaptive Beamforming”, In: Proc. of International Conference on Acoustics, Speech, and Signal Processing, pp. 2593-2596, 2008.
[16] S. Sitjongsataporn and P. Yuvapoositanon, “Low Complexity Adaptive Step-Size Filtered Gradient-based Per-Tone DMT Equalisation”, In: Proc. of International Symposium on Circuits and Systems, pp. 2526-2529, 2010.
[17] S. Kalluri and G.R. Arce, “General Class of Nonlinear Normalized Adaptive Filtering Algorithms”, IEEE Transactions on Signal Processing.