
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II: EXPRESS BRIEFS, VOL. 69, NO. 3, MARCH 2022

Chebyshev Functional Link Spline Neural Filter for Nonlinear Dynamic System Identification

Zhao Zhang and Jiashu Zhang

Abstract—In order to increase the nonlinear fitting performance of the functional link neural network (FLNN), a novel Chebyshev functional link spline neural filter (CFLSNF) for system identification is proposed. Compared with the weak nonlinearity and boundedness of fixed activation functions (e.g., sigmoid and tanh), the CFLSNF has stronger nonlinear approximation ability than the FLNN due to the flexible interpolation ability of the spline activation function. At the same time, the proposed CFLSNF eliminates the hidden layers by using Chebyshev polynomials to extend the input space into high dimensions, which shows certain computational advantages over artificial neural network (ANN) structures. Moreover, the CFLSNF-LMS algorithm is developed to update the weights of the CFLSNF, and its stability conditions and computational complexity are studied. Besides, in order to make the CFLSNF structure suitable for impulsive noise interference environments, a robust algorithm based on the maximum versoria criterion is also proposed. Finally, the validity of the proposed architecture and algorithms is verified by experiments.

Index Terms—FLNN, Chebyshev polynomial, spline activation function, nonlinear system identification, maximum versoria criterion.

Fig. 1. Two different activation functions: (a) hyperbolic tangent function and (b) an example of an arbitrarily generated third-order spline function.

I. INTRODUCTION

ACCURATE identification of complex dynamic objects is a key issue in control theory. The lack of prior information about the unknown plant makes this problem a challenge in practical applications. As powerful learning and simulation tools, artificial neural networks (ANNs) have shown the ability to approximate arbitrarily complex dynamic nonlinear relationships, especially when the prior information about the system is insufficient. The representative multilayer perceptron (MLP) and recurrent neural network (RNN) have been successfully verified in communication, biomedical and other fields. However, they share one shortcoming: the computational complexity increases with the number of iterations in the training process.

To reduce the computational complexity, an alternative single-layer structure called the FLNN was presented [1], [2]. The biggest advantage of the FLNN is that it reduces the computational burden by extending the input signal into a higher-dimensional space through functional expansion. Based on different orthogonal polynomial expansion methods, FLNNs mainly include the trigonometric FLNN (TFLNN) [2], Chebyshev FLNN (CFLNN) [3], Legendre FLNN (LFLNN) [4] and others. In recent years, based on different topological structures and combinations, various variants of FLNN structures and improved algorithms have been developed [5]–[10]. These structures and algorithms have been applied in active noise control, nonlinear echo cancellation and other applications [11]–[15]. The fixed activation function tanh(·) is generally used in FLNNs. From Fig. 1(a), we can intuitively see that the hyperbolic tangent is almost linear on the interval (−1.4, 1.4) (the solid red line part), while for x > 3 and x < −3 its value is approximately 1 and −1, respectively. In other words, the range of the hyperbolic tangent is fixed between −1 and 1, and its nonlinear approximation ability is limited. Therefore, it is necessary to find an activation function with higher nonlinear approximation ability.
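The saturation behavior of tanh described above is easy to verify numerically. A quick check in plain Python (not part of the original brief):

```python
import math

# Near the origin tanh tracks the identity; its deviation from x
# grows as |x| increases toward the edge of the quasi-linear region.
for x in (0.1, 0.5, 1.0, 1.4):
    print(f"tanh({x}) = {math.tanh(x):.4f}, deviation from x = {math.tanh(x) - x:+.4f}")

# Beyond |x| = 3 the output is pinned near +/-1: large differences in
# the input become almost invisible after the activation, which is the
# limited approximation ability discussed above.
for x in (3.0, 5.0, -3.0, -5.0):
    print(f"tanh({x}) = {math.tanh(x):.4f}")
```

The output never leaves (−1, 1), no matter how large the input, which is exactly the boundedness the spline activation is meant to overcome.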
A novel architecture called the Chebyshev functional link spline neural filter (CFLSNF) is presented in this brief to improve the nonlinear approximation capability of the FLNN. Compared with an FLNN with a fixed activation function, the proposed CFLSNF can flexibly approximate many nonlinear curves by adopting an adaptive third-order spline interpolation function [16]–[18]. An arbitrarily generated spline function of third order is shown in Fig. 1(b); it can be seen that the nonlinear fitting ability of the spline activation function is stronger than that of tanh(·). At the same time, the CFLSNF-LMS algorithm is developed to update the weights of the CFLSNF.

Manuscript received August 20, 2021; accepted September 9, 2021. Date of publication September 13, 2021; date of current version March 15, 2022. This work was supported in part by the National Science Foundation of China under Grant 62071396. This brief was recommended by Associate Editor S. Wang. (Corresponding author: Jiashu Zhang.)

Zhao Zhang is with the School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China (e-mail: [email protected]).

Jiashu Zhang is with the School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China (e-mail: [email protected]).

Color versions of one or more figures in this article are available at https://doi.org/10.1109/TCSII.2021.3111919.

Digital Object Identifier 10.1109/TCSII.2021.3111919

1549-7747 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
In addition, based on the maximum versoria criterion [19], [20], a corresponding robust algorithm is derived to improve the convergence performance of the CFLSNF structure in non-Gaussian noise environments. Finally, simulation results show that the proposed structure and algorithms are effective.

II. PROPOSED CFLSNF AND CFLSNF-LMS ALGORITHM

Fig. 2. Schematic diagram of the proposed CFLSNF.

The schematic diagram of the proposed CFLSNF is given in Fig. 2; it consists of a Chebyshev polynomial expansion and an adaptive spline activation function.

The p-th order Chebyshev polynomial is denoted by T_p(x), and it can be generated by the recursion (1), where p is the order and −1 ≤ x ≤ 1:

T_{p+1}(x) = 2x T_p(x) − T_{p−1}(x)        (1)

The zeroth- and first-order Chebyshev polynomials are given by T_0(x) = 1 and T_1(x) = x, respectively. To reduce the complexity of the proposed CFLSNF, we choose p = 2 in this brief. Assume that x(k) = [x(k), x(k−1), ..., x(k−m+1)]^T, where m is the input dimension. Using the Chebyshev expansion, the input pattern can be expanded as follows:

φ(k) = [φ_1(k), φ_2(k), ..., φ_N1(k)]^T
     = [1, T_1(x(k)), T_1(x(k−1)), ..., T_1(x(k−m+1)),
        T_1(x(k))T_1(x(k−1)), T_1(x(k))T_1(x(k−2)), ..., T_1(x(k−m))T_1(x(k−m+1)),
        T_2(x(k)), T_2(x(k−1)), ..., T_2(x(k−m+1))]^T
     = [1, x(k), x(k−1), ..., x(k−m+1),
        x(k)x(k−1), x(k)x(k−2), ..., x(k−m)x(k−m+1),
        2x(k)^2 − 1, 2x(k−1)^2 − 1, ..., 2x(k−m+1)^2 − 1]^T        (2)

where N_1 represents the number of dimensions of the m-dimensional input signal extended into the higher-dimensional space; N_1 = 1 + m + m(m+1)/2.

Therefore, the output z(k) in vector form is obtained by

z(k) = φ^T(k) w(k)        (3)

where w(k) = [w_1(k), w_2(k), ..., w_N1(k)]^T is the Chebyshev coefficient vector.

Correspondingly, the local parameter u and the span index b can be computed by

u = z(k)/Δx − ⌊z(k)/Δx⌋,    b = ⌊z(k)/Δx⌋ + (N_q − 1)/2        (4)

where Δx is the uniform interval between two adjacent knots, ⌊·⌋ is the floor operator and N_q is the length of the spline lookup table.

Therefore, the output of the CFLSNF is given by

y(k) = u^T(k) C q_b(k)        (5)

where u(k) is defined as u(k) = [u^3, u^2, u, 1]^T and q_b(k) = [q_b(k), q_{b+1}(k), q_{b+2}(k), q_{b+3}(k)]^T, in which q_b(k) is the b-th knot of the spline control points. C is a third-order spline basis matrix; the CR spline basis matrix [16] is used uniformly in this brief.

In the learning process, the weights and control parameters of the CFLSNF are updated by the stochastic gradient adaptive method with the cost function

J(k) = (1/2) E[e^2(k)]        (6)

where e(k) = d(k) − y(k) is the error signal, d(k) is the desired output of the unknown system and y(k) is the output of the adaptive CFLSNF identifier. In each iteration, the input signal passes through the CFLSNF identifier to obtain the output error, and the error value is used in the back-propagation (BP) algorithm to minimize the cost function in (6). The weights and the control points are updated as follows:

w(k+1) = w(k) + Δw(k) = w(k) − μ_w ∂J(k)/∂w(k)        (7)

q_b(k+1) = q_b(k) + Δq_b(k) = q_b(k) − μ_q ∂J(k)/∂q_b(k)        (8)

where μ_w and μ_q are learning rates. The partial derivatives of J(k) with respect to w(k) and q_b(k) are

∇_w J(k) = −e(k) ∂y(k)/∂w(k) = −e(k) [∂y(k)/∂u][∂u/∂z(k)][∂z(k)/∂w(k)] = −e(k) (1/Δx) u̇^T(k) C q_b(k) φ(k)        (9)

∇_q J(k) = −e(k) ∂y(k)/∂q_b(k) = −e(k) C^T u(k)        (10)

where u̇(k) = [3u^2, 2u, 1, 0]^T.

Therefore, the update rules of the BP iterative learning algorithm are written as

w(k+1) = w(k) + μ_w e(k) (1/Δx) u̇^T(k) C q_b(k) φ(k)        (11)

q_b(k+1) = q_b(k) + μ_q e(k) C^T u(k)        (12)

Different from traditional activation functions (e.g., tanh(·), sigmoid(·)), the proposed CFLSNF introduces the adaptive spline interpolation function as the activation function and therefore has better nonlinear approximation ability than the traditional FLNN. The learning process of the proposed CFLSNF is given in Table I.
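To make the forward pass (2)–(5) and the updates (11)–(12) concrete, the following NumPy sketch reimplements one CFLSNF-LMS iteration. It is an illustration under stated assumptions, not the authors' code: the CR basis matrix below is the common Catmull-Rom choice, the control points are initialized so the spline is initially the identity, the step sizes default to the scenario-1 values used later, and the toy plant at the bottom is invented purely to exercise the loop.

```python
import numpy as np

# Catmull-Rom (CR) spline basis matrix, assumed to play the role of C in Eq. (5).
C = 0.5 * np.array([[-1.0,  3.0, -3.0,  1.0],
                    [ 2.0, -5.0,  4.0, -1.0],
                    [-1.0,  0.0,  1.0,  0.0],
                    [ 0.0,  2.0,  0.0,  0.0]])

def chebyshev_expand(x):
    """Order-2 Chebyshev functional expansion of Eq. (2).
    x holds [x(k), ..., x(k-m+1)]; returns phi(k) of length
    N1 = 1 + m + m(m+1)/2 (constant, T1 terms, cross terms, T2 terms)."""
    m = len(x)
    t1 = list(x)                                         # T1(x) = x
    cross = [x[i] * x[j] for i in range(m) for j in range(i + 1, m)]
    t2 = [2.0 * xi**2 - 1.0 for xi in x]                 # T2(x) = 2x^2 - 1
    return np.array([1.0] + t1 + cross + t2)

def spline_output(z, q, dx):
    """Local coordinate u, span index b (Eq. (4)) and output y (Eq. (5))."""
    Nq = len(q)
    s = z / dx
    b = int(np.floor(s)) + (Nq - 1) // 2
    b = min(max(b, 0), Nq - 4)          # clamp so q[b:b+4] stays in range
    u = s - np.floor(s)
    uvec = np.array([u**3, u**2, u, 1.0])
    return uvec @ C @ q[b:b + 4], u, b

def cflsnf_lms_step(xvec, d, w, q, dx, mu_w=0.15, mu_q=0.8):
    """One CFLSNF-LMS iteration: Eqs. (3)-(5) forward, (11)-(12) update."""
    phi = chebyshev_expand(xvec)
    z = phi @ w                                          # Eq. (3)
    y, u, b = spline_output(z, q, dx)
    e = d - y                                            # error signal
    udot = np.array([3 * u**2, 2 * u, 1.0, 0.0])         # derivative of u-vector
    w += mu_w * e * (udot @ C @ q[b:b + 4]) / dx * phi   # Eq. (11)
    uvec = np.array([u**3, u**2, u, 1.0])
    q[b:b + 4] += mu_q * e * (C.T @ uvec)                # Eq. (12)
    return e

m, dx, Nq = 2, 0.2, 23
w = np.zeros(1 + m + m * (m + 1) // 2)
# Control points placed so that the CR spline initially implements y = z.
q = (np.arange(Nq) - (Nq - 1) // 2 - 1) * dx
rng = np.random.default_rng(0)
errs = []
for k in range(2000):
    xvec = rng.uniform(-0.5, 0.5, m)
    d = np.tanh(0.6 * xvec[0] - 0.3 * xvec[1])           # toy unknown plant
    errs.append(cflsnf_lms_step(xvec, d, w, q, dx))
```

With m = 2 the expansion length is N_1 = 1 + 2 + 3 = 6, matching the formula below (2); only these 6 weights and 4 local control points are touched per iteration, which is the computational advantage over a hidden-layer network.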


TABLE I
LEARNING PHASE OF THE PROPOSED CFLSNF

III. CONVERGENCE PROPERTIES AND COMPUTATIONAL COMPLEXITY

A. Performance Analysis

It is necessary to analyze the bounds of the step sizes μ_w and μ_q to guarantee the convergence of the CFLSNF-LMS algorithm. Expanding e(k+1) in a Taylor series around instant k and stopping at the first order, we obtain

e(k+1) = e(k) + [∂e(k)/∂w^T(k)] Δw(k) + h.o.t.        (13)

e(k+1) = e(k) + [∂e(k)/∂q_b^T(k)] Δq_b(k) + h.o.t.        (14)

where h.o.t. denotes the higher-order terms. After some simple operations, we obtain

e(k+1) = e(k) − μ_w e(k) ‖(1/Δx) u̇^T(k) C q_b(k) φ(k)‖^2 = [1 − μ_w ‖(1/Δx) u̇^T(k) C q_b(k) φ(k)‖^2] e(k)        (15)

e(k+1) = [1 − μ_q ‖C^T u(k)‖^2] e(k)        (16)

To meet the condition |e(k+1)| < |e(k)|, the following bounds on μ_w and μ_q should be satisfied:

0 < μ_w < 2Δx^2 / ([3u^2 C_1 q_b(k) + 2u C_2 q_b(k) + C_3 q_b(k)]^2 ‖φ(k)‖^2)        (17)

0 < μ_q < 2 / ‖C^T u(k)‖^2        (18)

where C_i ∈ R^(1×4) is the i-th row of the C matrix.

B. Computational Complexity

We assume a three-layer SNN structure in which the input, hidden and output layers contain n_0, n_1 and n_2 units (excluding the threshold unit), respectively. The numbers of weights and control points that the SNN needs to update in one iteration are n_0·n_1 + n_1·n_2 + n_1 + n_2 and 4(n_1 + n_2), respectively, whereas the CFLSNF only updates N_1 weights and 4 control points. The computation of the CFLSNF is greatly reduced compared with the SNN since no hidden layer exists. Due to the introduction of spline interpolation, four control points are updated in each iteration, so compared with the tanh(·) operation the computational amount of the CFLSNF is slightly higher than that of the FLNN.

IV. ROBUST CFLSNF-MVC ALGORITHM

To maintain the convergence of the proposed CFLSNF in non-Gaussian noise environments, a robust CFLSNF-MVC algorithm is derived by maximizing the versoria function

J(k) = (1/2) E[1 / (1 + e^2(k))]        (19)

The recursive update equations for the weights and control points of the CFLSNF-MVC algorithm are obtained by removing the expectation operator and using the gradient ascent method:

w(k+1) = w(k) + η_w(k) e(k) (1/Δx) u̇^T(k) C q_b(k) φ(k)        (20)

q_b(k+1) = q_b(k) + η_q(k) e(k) C^T u(k)        (21)

where η_w(k) = μ_w / (1 + e^2(k))^2 and η_q(k) = μ_q / (1 + e^2(k))^2 can be regarded as variable step-size implementations of the CFLSNF-LMS algorithm. Different from the CFLSNF-LMS algorithm, when the error e(k) contains an outlier, the values of η_w and η_q are almost zero, so (20) and (21) are barely updated. This is the mechanism by which the CFLSNF-MVC algorithm ensures convergence in non-Gaussian environments.

V. EXPERIMENTAL RESULTS

A. Dynamic System Identification Scenario

We evaluate the performance of the proposed CFLSNF, the SNN (spline neural network, a multilayer perceptron model based on the adaptive spline activation function) [16] and the FLNN in nonlinear dynamic system identification. The comparison is carried out in terms of the desired output of the unknown plant, the estimated output of the NN model and the estimation error. The SNN used has a structure of {2-10-1}. The measure used to evaluate the identifier is the normalized mean square error (NMSE), defined as

NMSE(dB) = 10 log_10( (1/(σ^2 T_D)) Σ_{k=1}^{T_D} [y(k) − ŷ(k)]^2 )        (22)

where y(k) and ŷ(k) denote the output of the plant and the output of the NN model, respectively, and σ^2 denotes the variance of the plant output sequence over the training duration T_D = 600. The input signal used in this section is given as

u(k) = sin(2πk/250)                            for 0 < k ≤ 250
u(k) = 0.8 sin(2πk/250) + 0.2 sin(2πk/25)      for k > 250        (23)

1) Scenario 1: This experiment identifies the following difference equation:

y_a(k+1) = y_a(k) y_a(k−1) [y_a(k) + 0.5][y_a(k−1) − 1] / (1 + y_a^2(k) + y_a^2(k−1)) + u(k)        (24)

A series-parallel model to identify the above plant is given by

ŷ_a(k+1) = N(y_a(k), y_a(k−1)) + u(k)        (25)

where N(·) is the neural network model used to approximate the nonlinear function in (24).
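The scenario-1 protocol can be scaffolded in a few lines of Python: the input (23), the plant (24), the series-parallel loop (25) and the NMSE of (22) are implemented directly. In this sketch, `nn_model` is a hypothetical stand-in object with `predict`/`adapt` methods (not part of the original brief), since any of the compared identifiers can be plugged in:

```python
import numpy as np

def input_signal(k):
    """Input of Eq. (23)."""
    if k <= 250:
        return np.sin(2 * np.pi * k / 250)
    return 0.8 * np.sin(2 * np.pi * k / 250) + 0.2 * np.sin(2 * np.pi * k / 25)

def plant_step(y1, y2, u):
    """Difference equation of Eq. (24): returns y_a(k+1) from
    y1 = y_a(k), y2 = y_a(k-1) and the input u(k)."""
    num = y1 * y2 * (y1 + 0.5) * (y2 - 1.0)
    return num / (1.0 + y1**2 + y2**2) + u

def nmse_db(y, y_hat):
    """Normalized mean square error of Eq. (22), with T_D = len(y)."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return 10.0 * np.log10(np.sum((y - y_hat)**2) / (np.var(y) * len(y)))

def identify(nn_model, T_D=600):
    """Series-parallel identification loop of Eq. (25): the model sees the
    true past plant outputs, predicts the nonlinear part N(.), and u(k)
    is added outside the network."""
    y = [0.0, 0.0]
    y_hat = []
    for k in range(1, T_D + 1):
        u = input_signal(k)
        y_next = plant_step(y[-1], y[-2], u)
        y_hat.append(nn_model.predict([y[-1], y[-2]]) + u)
        nn_model.adapt([y[-1], y[-2]], y_next - u)   # target is the N(.) part
        y.append(y_next)
    return nmse_db(y[2:], y_hat)
```

Even a trivial predictor (returning zero and never adapting) runs through this loop and yields a finite NMSE; substituting the CFLSNF of Section II reproduces the scenario-1 setup.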


Fig. 3. Identification of the nonlinear plant (scenario 1): (a) CFLSNF; (b) FLNN and (c) SNN.

Fig. 4. Identification of the nonlinear plant (scenario 2): (a) CFLSNF; (b) FLNN and (c) SNN.

The parameters are given as: CFLSNF (μ_w = 0.15, μ_q = 0.8, Δx = 0.2, m = 2), FLNN (μ_w = 0.15, m = 2) and SNN (μ_w = 0.05, μ_q = 0.05, Δx = 0.2). The NMSEs of the CFLSNF, FLNN and SNN were −32.7949 dB, −31.2637 dB and −24.5304 dB, respectively.

2) Scenario 2: In scenario 2, the target system is described by

y_a(k+1) = y_a(k) y_a(k−1) y_a(k−2) u(k−1) [y_a(k−2) − 1] / (1 + y_a^2(k−1) + y_a^2(k−2))        (26)

The series-parallel model for identifying the above plant is

ŷ_a(k+1) = N(y_a(k), y_a(k−1), u(k), u(k−1))        (27)

The parameters are given as: CFLSNF (μ_w = 0.15, μ_q = 0.1, Δx = 0.2, m = 5), FLNN (μ_w = 0.15, m = 5) and SNN (μ_w = 0.1, μ_q = 0.1, Δx = 0.2). The NMSEs of the CFLSNF, FLNN and SNN were −19.6180 dB, −18.7753 dB and −12.4007 dB, respectively. From the above two experimental results, we can see that the NMSE of the CFLSNF is lower than that of the FLNN, which verifies the performance of the proposed structure.

B. System Identification Scenario Under Non-Gaussian Environment

To further test the robustness of the proposed CFLSNF-MVC algorithm, the unknown nonlinear identification system is given by

y(k) = y(k−1) / (1 + y^2(k−1)) + x^3(k)        (28)

where y(k) is the output of the model and x(k) is the input signal, which follows a uniform distribution. The background noise is a zero-mean white Gaussian noise with a variance of 0.001.
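The outlier-suppression mechanism of the CFLSNF-MVC updates can be illustrated directly from the versoria-scaled step size of Section IV, together with the Bernoulli-Gaussian impulsive model used below (the specific error values in the sketch are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

def impulsive_noise(n, p=0.5, var=1000.0):
    """Bernoulli-Gaussian impulsive model of Eq. (29):
    theta(k) = b(k) * v_b(k)."""
    b = rng.random(n) < p                      # Bernoulli gating sequence
    v = rng.normal(0.0, np.sqrt(var), n)       # zero-mean Gaussian amplitudes
    return b * v

def mvc_gain(e, mu):
    """Variable step size of the CFLSNF-MVC updates (20)-(21):
    eta(k) = mu / (1 + e(k)^2)^2."""
    return mu / (1.0 + e**2)**2

mu_w = 0.01
for e in (0.05, 0.5, 30.0):    # nominal errors vs. an impulsive outlier
    print(f"e = {e}: eta_w = {mvc_gain(e, mu_w):.3e}")
```

When e(k) is dominated by an impulse drawn from a variance-1000 Gaussian, η_w shrinks by several orders of magnitude, so the updates (20) and (21) effectively freeze for that sample; for small nominal errors η_w stays close to μ_w and the algorithm behaves like CFLSNF-LMS.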


Fig. 5. Performance comparison between robust CFLSNF-MVC, CFLSNF-LMS, GFLANN-LMS, AEFLANN-LMS and FLNN-LMS under the condition of impulsive noise interference.

In order to test the robustness of the CFLSNF-MVC algorithm against impulsive noise, the following impulsive model was applied at the 2000th, 4000th and 6000th iterations:

ϑ(k) = b(k) v_b(k)        (29)

where b(k) is an independent identically distributed Bernoulli random sequence with a probability of 0.5 and v_b(k) is an independent identically distributed Gaussian sequence with zero mean and variance 1000. The mean square error (MSE) curves comparing the robust CFLSNF-MVC with CFLSNF-LMS, GFLANN-LMS [21], AEFLANN-LMS [22] and FLNN-LMS are shown in Fig. 5. The parameters are given as: robust CFLSNF-MVC (m = 5, μ_w = 0.01, μ_q = 0.005), CFLSNF-LMS (m = 5, μ_w = 0.01, μ_q = 0.01), GFLANN-LMS (m = 5, P = 1, μ_w = 0.02), AEFLANN-LMS (m = 5, B = 2, μ_w = 0.006, μ_a = 0.01) and FLNN-LMS (m = 5, μ_w = 0.01). As can be seen, the steady-state errors of the CFLSNF-based structures are lower than those of the FLNN, GFLANN and AEFLANN at the same initial convergence rate, and the robust CFLSNF-MVC shows a faster convergence rate than the other algorithms under impulsive noise interference.

VI. CONCLUSION

In this brief, we proposed a novel single-layer NN filter structure called the Chebyshev functional link spline neural filter for nonlinear dynamic system identification. Compared with FLNN-based structures, the CFLSNF has stronger nonlinear approximation ability due to the flexible interpolation ability of the spline activation function. Furthermore, its stability conditions and computational complexity were discussed. The nonlinear approximation capability of the CFLSNF was verified against the SNN and FLNN in dynamic system identification scenarios. In addition, the robust CFLSNF-MVC algorithm was derived based on the maximum versoria criterion, and its convergence performance was verified under impulsive noise interference. Future work may extend the structure proposed in this brief to the complex and quaternion domains [23].

REFERENCES

[1] Y. Pao, Adaptive Pattern Recognition and Neural Networks. Boston, MA, USA: Addison-Wesley, 1989.
[2] Y.-H. Pao and S. M. Phillips, "The functional link net and learning optimal control," Neurocomputing, vol. 9, no. 2, pp. 149–164, 1995.
[3] J. C. Patra and A. C. Kot, "Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 32, no. 4, pp. 505–511, Aug. 2002.
[4] J. C. Patra and C. Bornand, "Nonlinear dynamic system identification using Legendre neural network," in Proc. IEEE Int. Joint Conf. Neural Netw. (IJCNN), 2010, pp. 1–7.
[5] H. Zhao and J. Zhang, "Functional link neural network cascaded with Chebyshev orthogonal polynomial for nonlinear channel equalization," Signal Process., vol. 88, no. 8, pp. 1946–1957, 2008.
[6] H. Zhao and J. Zhang, "Nonlinear dynamic system identification using pipelined functional link artificial recurrent neural network," Neurocomputing, vol. 72, nos. 13–15, pp. 3046–3054, 2009.
[7] M. C. Nguyen, "Nonlinear adaptive filter based on pipelined bilinear function link neural networks architecture," in Intelligent Systems and Networks: Selected Articles from ICISN, Vietnam. Singapore: Springer, 2021, p. 256.
[8] T. Deb, D. Ray, and N. V. George, "Design of nonlinear filters using affine projection algorithm based exact and approximate adaptive exponential functional link networks," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 67, no. 11, pp. 2757–2761, Nov. 2020.
[9] M. Cui, H. Liu, Z. Li, Y. Tang, and X. Guan, "Identification of Hammerstein model using functional link artificial neural network," Neurocomputing, vol. 142, pp. 419–428, Oct. 2014.
[10] H. Zhao and J. Zhang, "Pipelined Chebyshev functional link artificial recurrent neural network for nonlinear adaptive filter," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 1, pp. 162–172, Feb. 2010.
[11] S. Zhang and W. X. Zheng, "Recursive adaptive sparse exponential functional link neural network for nonlinear AEC in impulsive noise environment," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 9, pp. 4314–4323, Sep. 2018.
[12] D. C. Le, J. Zhang, and Y. Pang, "A bilinear functional link artificial neural network filter for nonlinear active noise control and its stability condition," Appl. Acoust., vol. 132, pp. 19–25, Mar. 2018.
[13] D. C. Le, J. Zhang, and D. Li, "Hierarchical partial update generalized functional link artificial neural network filter for nonlinear active noise control," Digit. Signal Process., vol. 93, pp. 160–171, Oct. 2019.
[14] M. Li, Z. Cai, Y. Yao, C. Xu, Y. Jin, and X. Wang, "Complex-valued pipelined Chebyshev functional link recurrent neural network for joint compensation of wideband transmitter distortions and impairments," IEEE Access, vol. 8, pp. 159828–159838, 2020.
[15] A. Baliyan and M. S. Kumar, "Efficient prediction of short term load using Chebyshev functional link artificial neural network," in Proc. IEEE Int. Conf. Innov. Inf. Embedded Commun. Syst. (ICIIECS), 2015, pp. 1–5.
[16] P. Campolucci, F. Capperelli, S. Guarnieri, F. Piazza, and A. Uncini, "Neural networks with adaptive spline activation function," in Proc. IEEE 8th Mediterr. Electrotech. Conf. (MELECON), vol. 3, 1996, pp. 1442–1445.
[17] L. Yang, J. Liu, R. Sun, R. Yan, and X. Chen, "Spline adaptive filters based on real-time over-sampling strategy for nonlinear system identification," Nonlinear Dyn., vol. 103, no. 1, pp. 657–675, 2021.
[18] M. Rathod, V. Patel, and N. V. George, "Generalized spline nonlinear adaptive filters," Exp. Syst. Appl., vol. 83, pp. 122–130, Oct. 2017.
[19] F. Huang, J. Zhang, and S. Zhang, "Maximum versoria criterion-based robust adaptive filtering algorithm," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 64, no. 10, pp. 1252–1256, Oct. 2017.
[20] Z. Zhang, S. Zhang, and J. Zhang, "Robust weight-constraint decorrelation normalized maximum versoria algorithm," in Proc. IEEE 9th Int. Workshop Signal Design Appl. Commun. (IWSDA), 2019, pp. 1–4.
[21] G. L. Sicuranza and A. Carini, "A generalized FLANN filter for nonlinear active noise control," IEEE Trans. Audio, Speech, Language Process., vol. 19, no. 8, pp. 2412–2417, Nov. 2011.
[22] V. Patel, V. Gandhi, S. Heda, and N. V. George, "Design of adaptive exponential functional link network-based nonlinear filters," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 63, no. 9, pp. 1434–1442, Sep. 2016.
[23] S. Zhang, J. Zhang, W. X. Zheng, and H. C. So, "Widely linear complex-valued estimated-input LMS algorithm for bias-compensated adaptive filtering with noisy measurements," IEEE Trans. Signal Process., vol. 67, no. 13, pp. 3592–3605, Jul. 2019.
