
Article
Research on Kalman Filter Fusion Navigation Algorithm
Assisted by CNN-LSTM Neural Network
Kai Chen * , Pengtao Zhang, Liang You and Jian Sun

Equipment Management and Unmanned Aerial Vehicles Engineering College, Air Force Engineering University,
Xi’an 710051, China; [email protected] (P.Z.); [email protected] (L.Y.); [email protected] (J.S.)
* Correspondence: [email protected]

Abstract: In response to the challenge of single navigation methods failing to meet the high precision
requirements for unmanned aerial vehicle (UAV) navigation in complex environments, a novel
algorithm that integrates Global Navigation Satellite System/Inertial Navigation System (GNSS/INS)
navigation information is proposed to enhance the positioning accuracy and robustness of UAV
navigation systems. First, the fundamental principles of Kalman filtering and its application in
navigation are introduced. Second, the basic principles of Convolutional Neural Networks (CNNs)
and Long Short-Term Memory (LSTM) networks and their applications in the navigation domain
are elaborated. Subsequently, an algorithm based on a CNN and LSTM-assisted Kalman filtering
fusion navigation is proposed. Finally, the feasibility and effectiveness of the proposed algorithm are
validated through experiments. Experimental results demonstrate that the Kalman filtering fusion
navigation algorithm assisted by a CNN and LSTM significantly improves the positioning accuracy
and robustness of UAV navigation systems in highly interfered complex environments.

Keywords: convolutional neural network; long short-term memory network; Kalman filter; fusion
navigation algorithm; positioning accuracy

Citation: Chen, K.; Zhang, P.; You, L.; Sun, J. Research on Kalman Filter Fusion Navigation Algorithm Assisted by CNN-LSTM Neural Network. Appl. Sci. 2024, 14, 5493. https://doi.org/10.3390/app14135493
Academic Editor: Douglas O'Shaughnessy
Received: 23 May 2024; Revised: 16 June 2024; Accepted: 21 June 2024; Published: 25 June 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

With the rapid advancement of unmanned aerial vehicle (UAV) technology [1], the application domains of UAVs continue to expand, posing higher demands on the accuracy and stability of UAV navigation systems. Traditional UAV navigation algorithms often rely on single-sensor data sources such as GPS and Inertial Measurement Units (IMUs) [2]. However, in complex environments, these sensor data are susceptible to noise, interference, and errors, leading to a decrease in navigation accuracy. To ensure that UAVs can efficiently and accurately execute tasks in complex environments, the performance of their navigation systems becomes crucial. Therefore, research on UAV navigation algorithms based on the fusion of multisensor data holds significant practical significance and application value.

During the execution of tasks, UAV navigation systems typically adopt a fusion navigation approach integrating the Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS) [3]. While GNSS offers long-term high-precision capabilities and cost-effectiveness, its inherent drawback lies in susceptibility to severe electromagnetic interference in battlefield environments, leading to disturbances in the GNSS receiver signals. Conversely, INS provides a higher sampling rate, enabling continuous signal output for recursive estimation. However, when used alone, the INS system's navigation computation results may suffer from increased errors due to noise introduced through integration operations, leading to divergence over time. The advantages and disadvantages of INS and GNSS are complementary. Integrating the strengths of both technologies provides a continuous, high-bandwidth, comprehensive, and high-precision navigation solution. This integration not only overcomes performance issues of individual sensors but also yields a system performance surpassing that of a single sensor. Therefore, in



practical navigation computations, employing Kalman-filter-based theory to integrate INS-computed data with GNSS information offers a more continuous and reliable navigation solution, enhancing navigation parameter computation results.
Kalman filtering [4], as a classical estimation theory method, is widely employed
in UAV navigation systems. It estimates and corrects the state of the UAV through pre-
diction and update steps, thereby enhancing navigation accuracy. El-Sheimy [5] and
others proposed that under the assumption of process and measurement noises following
zero-mean Gaussian distributions with known covariance matrices, the Kalman filter can
achieve optimal estimation solutions. However, this assumption does not always hold in
practical systems. Therefore, some scholars have proposed alternative methods from the
perspectives of noise modeling and adaptive estimation.
The first method involves a thorough analysis of the composition mechanism of
noise and utilizes mathematically interpretable models to accurately describe the noise
characteristics. Kalman filtering often adopts such mechanistic models as the core models
of the system during state estimation. For example, Nirmal et al. [6] ingeniously utilized
Allan variance to analyze noise in IMUs in their 2016 study and successfully identified
different noise components within the IMU. It is noteworthy that the dynamic Allan
variance [7] was originally designed to assess the stability of atomic clocks but was later
extended to capture nonstationary components in signals. Although this noise analysis
method can provide targeted explanations for each system, its generalizability is relatively
weak and is significantly affected by sensor differences, thus requiring further refinement.
Recently, scholars have proposed an innovative nonlinear optimization method [8–10],
which combines gradient search and Newton search strategies, aiming to more accurately
identify noise using Gaussian activation functions.
Another strategy focuses on the adaptive estimation of noise based on data charac-
teristics. In this field, scholars have proposed various adaptive techniques, including but
not limited to Adaptive Kalman Filtering [11,12] (AKF), Adaptive Finite Impulse Response
Filters [13], and Adaptive Square Root EKF [14]. The common goal of these methods is to
intelligently adjust the parameters or structural elements of the system model according
to changes in system performance and different operating conditions. However, tradi-
tional analytical methods often struggle to extract valuable features when dealing with
highly nonlinear or noisy data, as they may overlook certain key information during the
estimation process.
The core challenge in system modeling lies in how to simplify its structure as much
as possible while ensuring model accuracy. In practical applications, identifying noise
model parameters is particularly challenging, primarily because the noise generated by
actual systems is often highly complex. These noises do not follow a single distribution
pattern but rather consist of mixed distributions formed by multiple distributions [15,16].
Therefore, it is difficult to accurately describe this highly nonlinear relationship solely from
a mechanistic or data feature perspective.
However, with the significant improvement in computer computational power, ma-
chine learning methods have regained widespread attention from researchers. Artificial
Neural Networks (ANNs) have demonstrated outstanding fitting capabilities [17] when
dealing with highly nonlinear problems. Particularly, derivative models based on Recurrent
Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) networks [18] and
Gated Recurrent Units (GRUs) [19], have gained favor among many scholars for handling
highly nonlinear problems.
ANNs offer robust solutions for system modeling and noise parameter identification
within the Kalman filtering framework. Currently, researchers are delving into the integra-
tion of Kalman filtering and neural networks, focusing primarily on two subfields: external
interaction fusion of Kalman filtering and neural networks and internal deep fusion of
Kalman filtering and neural networks. Research in these two subfields holds promise for
providing new insights and methods for handling complex systems and noise issues. By
combining neural networks with Kalman filtering, it is possible to fully leverage the advantages of neural networks in handling complex nonlinear problems and extracting features,
as well as the benefits of Kalman filtering in state estimation and noise suppression. Partic-
ularly, the combination of Convolutional Neural Networks (CNNs) and LSTM networks
demonstrates significant advantages in sequence data processing and feature extraction.
CNNs automatically learn and extract spatial features from input data, while LSTMs excel
in handling data with temporal dependencies, capturing long-term dependencies, thereby
enhancing the accuracy and stability of UAV navigation.
Therefore, this paper proposes a Kalman filtering UAV fusion navigation algorithm
assisted by CNN and LSTM. The algorithm extracts spatial features from multiple sensor
data using CNNs, processes temporal data using LSTMs, and integrates the extracted
feature information into the Kalman filtering framework. This algorithm effectively utilizes
the complementary nature of multisensor data to improve the accuracy and robustness of
UAV navigation systems.
This paper first introduces the basic principles of Kalman filtering and its current
applications in UAV navigation. Then, it elaborates on the design and implementation
process of the Kalman filtering UAV fusion navigation algorithm assisted by a CNN
and LSTM, including data preprocessing, feature extraction, and the construction of the
neural network framework. Finally, the effectiveness and performance of the algorithm are
validated through experiments, and the experimental results are analyzed and discussed.
Through this research, it is hoped to provide new insights and methods for the development
of UAV navigation algorithms, thereby promoting the further application and development
of UAV technology.

2. Kalman Filtering GNSS/INS Fusion Navigation Basic Principles


2.1. Basic Principles of Kalman Filtering
The Kalman filtering algorithm [20] minimizes mean square error as its core estima-
tion principle, cleverly combining current observation data with the previous moment’s
predicted value to achieve optimal estimation at the current moment. The primary formula
of the Kalman filter is as follows:
Prediction Equation:

X_t^- = F X_{t-1}^+ + B U_t
P_t^- = F P_{t-1}^+ F^T + Q        (1)

State Update Equation:

ΔZ_t = Z_t - H X_t^-
S_t = H P_t^- H^T + R
K_t = P_t^- H^T S_t^{-1}        (2)
X_t^+ = X_t^- + K_t ΔZ_t
P_t^+ = (I - K_t H) P_t^-
In this context, X represents the state vector (the superscripts - and + denote the predicted and updated estimates), P denotes the covariance matrix indicating uncertainty, B is the control matrix, U is the control vector, F stands for the system transition matrix representing the system's recursive process, Q represents the process noise covariance, H is the sensor transformation matrix, Z is the sensor measurement vector, R denotes the sensor noise covariance, and K signifies the Kalman gain.
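For concreteness, Equations (1) and (2) can be sketched in a few lines of Python. This is a generic textbook implementation, not the authors' code; all matrices below are placeholders supplied by the caller:

```python
import numpy as np

def kf_predict(x, P, F, B, u, Q):
    # Equation (1): propagate the state and its covariance
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    # Equation (2): innovation, gain, and corrected estimate
    dz = z - H @ x_pred                  # innovation ΔZ_t
    S = H @ P_pred @ H.T + R             # innovation covariance S_t
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain K_t
    x = x_pred + K @ dz
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P
```

In a fusion navigation loop, `kf_predict` would run at the IMU rate and `kf_update` whenever a GNSS measurement arrives.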
The Kalman filter algorithm is typically categorized into two processing modes: loosely coupled and tightly coupled [21].

2.1.1. Loosely Coupled

Loosely coupled [22] processing, during the state prediction process, relies solely on the sensor's output data as observation values to update the predicted values. This approach is logically straightforward and practical, allowing for the fusion of multiple sensors to enhance accuracy. For instance, an IMU can be used as the source of raw data for the prediction computation, while the GNSS positioning results serve as observation values for error correction updates. The mainstream practice usually involves employing the IMU as the prediction data sensor and other sensors as observation sensors.

Loosely coupled systems are generally divided into open-loop and closed-loop systems. Due to the large errors in the open-loop system, where the attitude of the IMU is not corrected by the Kalman filter, this experiment adopts a closed-loop system. In the closed-loop system, the attitude of the IMU is corrected by the Kalman filter. The architecture of the loosely coupled closed-loop system is illustrated in Figure 1.

Figure 1. Architecture of loosely coupled closed-loop system.

2.1.2. Tightly Coupled

In contrast, tightly coupled integration [23] requires the utilization of more raw and comprehensive data for computation, such as pseudorange data from GNSS. In tightly coupled integration, it is common to compare the pseudorange and pseudorange rate estimates computed from INS outputs with the pseudorange and pseudorange rate measurements from GNSS receivers. The difference obtained serves as the measurement input for the filter. Through combined navigation filtering, error estimates of the INS are generated, and these estimates can be used to correct the system through measurement updates, thereby enhancing accuracy.

The tightly coupled structure is relatively more complex but can utilize raw data for computation, thereby enhancing system accuracy. Its architecture is depicted in Figure 2.

Figure 2. Architecture of tightly coupled system.

2.2. Mathematical Model of GNSS/INS Fusion Navigation

The mathematical model of GNSS/INS fusion navigation is constructed below, using the Kalman filter tightly coupled algorithm as an example [24].

2.2.1. System Equations


In the tightly coupled mode, the state variables consist of two parts [25]: One part is
the error state of the INS, including attitude, velocity, position, and sensor biases, totaling
15 dimensions. Its state equations are as follows:

Ẋ_1(t) = F_1(t) X_1(t) + Γ(t) W(t)        (3)

In the equation provided, the state vector

X_1 = [φ_E, φ_N, φ_U, δυ_E, δυ_N, δυ_U, δL, δλ, δh, ε_x, ε_y, ε_z, ∇_x, ∇_y, ∇_z]^T

is established based on the strapdown INS error equation. The state transition matrix is given by

F_1 = [ (F_N)_9×9  (F_S)_9×6 ]
      [     0      (F_M)_6×6 ]_15×15        (4)
In the equation provided, F_N is the 9 × 9 error matrix of the INS, corresponding to the basic error equation of the INS. Within the transition matrix, F_S is represented as:

      [ C_b^n   O_3×3 ]
F_S = [ O_3×3   C_b^n ]        (5)
      [ O_3×3   O_3×3 ]

F_M = Diag(-1/T_rx, -1/T_ry, -1/T_rz, -1/T_ax, -1/T_ay, -1/T_az)        (6)
The other part of the error state comprises errors from the GNSS. In tightly coupled integration, two time-correlated errors are usually considered: one is the distance error δt_u equivalent to the clock error, and the other is the distance rate error δt_ru equivalent to the clock frequency error, with δt_ru typically modeled as a first-order Markov process. As this system involves two satellite navigation systems, the error states of the GNSS are represented by the distance errors equivalent to the clock errors of the GPS and BeiDou systems, denoted as δt_u1 and δt_u2, respectively, and the distance rate errors equivalent to the clock frequency errors, denoted as δt_ru1 and δt_ru2, respectively.

Therefore, the error state of the GNSS is given by

X_G(t) = [δt_u1, δt_u2, δt_ru1, δt_ru2]^T        (7)

The corresponding differential equations are

d(δt_u)/dt = δt_ru + ω_u
d(δt_ru)/dt = -β δt_ru + ω_ru        (8)

where β is the correlation time parameter. Expressing the equations in matrix form yields:

Ẋ_G(t) = F_G(t) X_G(t) + Γ_G(t) W_G(t)        (9)

Hence, consistent with Equation (8),

[d(δt_u1)/dt ]   [0  0   1       0     ][δt_u1 ]   [ω_u1 ]
[d(δt_u2)/dt ] = [0  0   0       1     ][δt_u2 ] + [ω_u2 ]        (10)
[d(δt_ru1)/dt]   [0  0  -β_ru1   0     ][δt_ru1]   [ω_ru1]
[d(δt_ru2)/dt]   [0  0   0      -β_ru2 ][δt_ru2]   [ω_ru2]
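To make the clock-error dynamics of Equation (8) concrete, a single Euler integration step can be sketched as follows. The step size, β, and initial values here are illustrative assumptions, not values from the paper:

```python
import math

def propagate_clock_error(dtu, dtru, beta, dt, w_u=0.0, w_ru=0.0):
    # Equation (8): the bias is driven by the drift; the drift decays
    # as a first-order Markov process with parameter beta
    dtu_next = dtu + (dtru + w_u) * dt
    dtru_next = dtru + (-beta * dtru + w_ru) * dt
    return dtu_next, dtru_next

# Noise-free propagation over 100 s: the drift decays roughly as exp(-beta * t)
dtu, dtru = 0.0, 0.5
for _ in range(1000):
    dtu, dtru = propagate_clock_error(dtu, dtru, beta=0.01, dt=0.1)
```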

Combining the error state equations of the INS and the GPS, the state equation of the
pseudorange and pseudorange rate combination system is obtained as follows:

[Ẋ_1(t)]   [F_1(t)    0   ][X_1(t)]   [Γ_1(t)    0   ][W_1(t)]
[Ẋ_G(t)] = [  0    F_G(t) ][X_G(t)] + [  0    Γ_G(t) ][W_G(t)]        (11)
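The block-diagonal structure of Equation (11) can be assembled mechanically. The sketch below uses an identity matrix as a stand-in for the 15 × 15 INS matrix of Equation (4), and illustrative β values for the clock model of Equations (8)–(10):

```python
import numpy as np

F1 = np.eye(15)                        # placeholder for the INS transition matrix of Eq. (4)
beta1, beta2 = 0.01, 0.01              # assumed Markov parameters for GPS and BeiDou

FG = np.zeros((4, 4))                  # clock-error model, Eqs. (8)-(10)
FG[0, 2] = FG[1, 3] = 1.0              # bias rates driven by the drift states
FG[2, 2], FG[3, 3] = -beta1, -beta2    # first-order Markov decay of the drifts

# Equation (11): augmented 19x19 transition matrix with zero cross-coupling
F = np.block([[F1, np.zeros((15, 4))],
              [np.zeros((4, 15)), FG]])
```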

2.2.2. Observation Equations


(1) Observation Equation of the System Pseudorange Composition
Since the position information output by the strapdown inertial navigation solution is typically represented in the geodetic coordinate system, that is, the longitude, latitude, and altitude coordinate system, it needs to be converted to the Earth-centered Earth-fixed (ECEF) coordinate system. The conversion relationship is as follows:

x_I = (R_n + h) cos L cos λ
y_I = (R_n + h) cos L sin λ        (12)
z_I = [R_n(1 - e²) + h] sin L
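The conversion in Equation (12) translates directly to code. The sketch below additionally computes R_n from the latitude using WGS-84 constants, which is an assumption on our part, since the text treats R_n as given:

```python
import math

A = 6378137.0            # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3    # WGS-84 first eccentricity squared

def lla_to_ecef(L, lam, h):
    # Equation (12): geodetic latitude L, longitude lam (radians), height h (m) -> ECEF
    Rn = A / math.sqrt(1.0 - E2 * math.sin(L) ** 2)  # prime-vertical radius of curvature
    x = (Rn + h) * math.cos(L) * math.cos(lam)
    y = (Rn + h) * math.cos(L) * math.sin(lam)
    z = (Rn * (1.0 - E2) + h) * math.sin(L)
    return x, y, z
```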

 
Assuming the position of the j-th satellite in the ECEF coordinate system is (x_s^j, y_s^j, z_s^j), then the pseudorange from the vehicle to the j-th satellite can be obtained using the position of the vehicle (x_I, y_I, z_I) calculated by the inertial navigation solution:

ρ_I^j = [(x_I - x_s^j)² + (y_I - y_s^j)² + (z_I - z_s^j)²]^(1/2)        (13)

Expanding Equation (13) in a Taylor series around the true position of the vehicle in the ECEF coordinate system (x, y, z) and retaining terms up to the first order, it can be obtained as follows:

ρ_I^j = [(x - x_s^j)² + (y - y_s^j)² + (z - z_s^j)²]^(1/2) + (∂ρ_I^j/∂x)δx + (∂ρ_I^j/∂y)δy + (∂ρ_I^j/∂z)δz        (14)

Then, the following relationships hold:

∂ρ_I^j/∂x = (x - x_s^j) / [(x - x_s^j)² + (y - y_s^j)² + (z - z_s^j)²]^(1/2) = e_j1

∂ρ_I^j/∂y = (y - y_s^j) / [(x - x_s^j)² + (y - y_s^j)² + (z - z_s^j)²]^(1/2) = e_j2        (15)

∂ρ_I^j/∂z = (z - z_s^j) / [(x - x_s^j)² + (y - y_s^j)² + (z - z_s^j)²]^(1/2) = e_j3

The expression for the pseudorange measured by the GNSS receiver between the vehicle and the j-th GNSS satellite is given by

ρ_G^j = [(x - x_s^j)² + (y - y_s^j)² + (z - z_s^j)²]^(1/2) - δt_u - υ_ρ^j        (16)

where δt_u represents the distance corresponding to the equivalent clock error, and υ_ρ^j denotes the pseudorange measurement noise, primarily stemming from effects such as multipath, tropospheric delay errors, and ionospheric errors.

Thus, the observation equation for the pseudorange error can be obtained as follows:

δρ^j = ρ_I^j - ρ_G^j = e_j1 δx + e_j2 δy + e_j3 δz + δt_u + υ_ρ^j        (17)

In general, the observation equation for the pseudorange is established by selecting the best four satellites as observation satellites:

      [δρ_1]   [e_11 e_12 e_13 1][δx  ]   [υ_ρ1]
δρ =  [δρ_2] = [e_21 e_22 e_23 1][δy  ] + [υ_ρ2]        (18)
      [δρ_3]   [e_31 e_32 e_33 1][δz  ]   [υ_ρ3]
      [δρ_4]   [e_41 e_42 e_43 1][δt_u]   [υ_ρ4]
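The direction cosines e_j1..e_j3 of Equation (15) and the geometry matrix of Equation (18) can be assembled as follows; the satellite positions used in the test are arbitrary placeholders:

```python
import numpy as np

def los_unit_vector(p_user, p_sat):
    # Equation (15): direction cosines (e_j1, e_j2, e_j3) of the user-to-satellite line of sight
    d = p_user - p_sat
    return d / np.linalg.norm(d)

def pseudorange_H(p_user, sats):
    # Equation (18): one row [e_j1, e_j2, e_j3, 1] per observed satellite,
    # the trailing 1 picking up the equivalent clock error δt_u
    return np.vstack([np.append(los_unit_vector(p_user, s), 1.0) for s in sats])
```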

Now, differentiating both sides of the coordinate transformation formula yields the following transformation relationship:

δx = cos L cos λ δh - (R_n + h) sin L cos λ δL - (R_n + h) cos L sin λ δλ
δy = cos L sin λ δh - (R_n + h) sin L sin λ δL + (R_n + h) cos L cos λ δλ        (19)
δz = sin L δh + [R_n(1 - e²) + h] cos L δL

Hence, the following observation equation can be derived:

Z_ρ(t) = H_ρ(t) X(t) + V_ρ(t)        (20)

where

H_ρ = [O_4×6 ⋮ H_ρ1 ⋮ O_4×6 ⋮ H_ρ2]

       [a_11 a_12 a_13]         [1 0]
H_ρ1 = [a_21 a_22 a_23], H_ρ2 = [1 0]        (21)
       [a_31 a_32 a_33]         [1 0]
       [a_41 a_42 a_43]         [1 0]
Substituting Equation (19) into Equation (17), the coefficients are obtained as follows:

a_j1 = -e_j1 (R_n + h) sin L cos λ - e_j2 (R_n + h) sin L sin λ + e_j3 [R_n(1 - e²) + h] cos L
a_j2 = -e_j1 (R_n + h) cos L sin λ + e_j2 (R_n + h) cos L cos λ        (22)
a_j3 = e_j1 cos L cos λ + e_j2 cos L sin λ + e_j3 sin L

(2) Observation Equation for the System Pseudorange Rate Composition


The pseudorange rate between the vehicle position output by the INS and the j-th satellite can be expressed as follows:

ρ̇_I^j = e_j1(ẋ - ẋ_s^j) + e_j2(ẏ - ẏ_s^j) + e_j3(ż - ż_s^j) + e_j1 δẋ + e_j2 δẏ + e_j3 δż        (23)

The pseudorange rate measured by the GNSS receiver is given by

ρ̇_G^j = e_j1(ẋ - ẋ_s^j) + e_j2(ẏ - ẏ_s^j) + e_j3(ż - ż_s^j) + δt_ru + υ_ρ̇^j        (24)

Therefore, the observation equation for the pseudorange rate error can be derived:

δρ̇^j = ρ̇_I^j - ρ̇_G^j = e_j1 δẋ + e_j2 δẏ + e_j3 δż - δt_ru - υ_ρ̇^j        (25)

In the case of four observed satellites, this gives

      [e_11 e_12 e_13 -1][δẋ   ]   [υ_ρ̇1]
δρ̇ = [e_21 e_22 e_23 -1][δẏ   ] - [υ_ρ̇2]        (26)
      [e_31 e_32 e_33 -1][δż   ]   [υ_ρ̇3]
      [e_41 e_42 e_43 -1][δt_ru]   [υ_ρ̇4]

As the velocity information output by the INS is represented in the geodetic coordinate system (i.e., the East–North–Up coordinate system), it is necessary to convert it through the coordinate transformation matrix from the geodetic coordinate system to the ECEF coordinate system, as follows:

δẋ = -δV_E sin λ - δV_N sin L cos λ + δV_U cos L cos λ
δẏ = δV_E cos λ - δV_N sin L sin λ + δV_U cos L sin λ        (27)
δż = δV_N cos L + δV_U sin L

Through the above analysis, the observation equation for the pseudorange rate is derived as follows:

Z_ρ̇(t) = H_ρ̇(t) X(t) + V_ρ̇(t)        (28)

where

H_ρ̇ = [O_4×3 ⋮ H_ρ̇1 ⋮ O_4×9 ⋮ H_ρ̇2]

        [b_11 b_12 b_13]          [0 1]
H_ρ̇1 = [b_21 b_22 b_23], H_ρ̇2 = [0 1]        (29)
        [b_31 b_32 b_33]          [0 1]
        [b_41 b_42 b_43]          [0 1]
The coefficient calculation formula is as follows:

b_j1 = -e_j1 sin λ + e_j2 cos λ
b_j2 = -e_j1 sin L cos λ - e_j2 sin L sin λ + e_j3 cos L        (30)
b_j3 = e_j1 cos L cos λ + e_j2 cos L sin λ + e_j3 sin L
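The b coefficients in Equation (30) are the direction cosines rotated from the ECEF frame into the local East–North–Up frame. The sketch below checks the formulas against an explicit rotation matrix; the latitude, longitude, and e values are arbitrary test inputs:

```python
import numpy as np

def b_coefficients(e, L, lam):
    # Equation (30), for one satellite with direction cosines e = (e_j1, e_j2, e_j3)
    e1, e2, e3 = e
    b1 = -e1 * np.sin(lam) + e2 * np.cos(lam)
    b2 = -e1 * np.sin(L) * np.cos(lam) - e2 * np.sin(L) * np.sin(lam) + e3 * np.cos(L)
    b3 = e1 * np.cos(L) * np.cos(lam) + e2 * np.cos(L) * np.sin(lam) + e3 * np.sin(L)
    return np.array([b1, b2, b3])

def ecef_to_enu_matrix(L, lam):
    # Rows are the East, North, and Up unit vectors expressed in ECEF coordinates
    return np.array([
        [-np.sin(lam), np.cos(lam), 0.0],
        [-np.sin(L) * np.cos(lam), -np.sin(L) * np.sin(lam), np.cos(L)],
        [np.cos(L) * np.cos(lam), np.cos(L) * np.sin(lam), np.sin(L)],
    ])
```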

The observation equation for the combined pseudorange and pseudorange rate system is as follows:

       [H_ρ(t) ]        [V_ρ(t) ]
Z(t) = [H_ρ̇(t)] X(t) + [V_ρ̇(t)]        (31)

3. Constructing the Framework for CNN and LSTM Assisted GNSS/INS Navigation System
During the study of the Kalman filter algorithm, when severe interference exists in
the navigation environment, regardless of whether the loosely coupled or tightly coupled
approach is employed, it can lead to significant errors in the final fusion navigation position-
ing. Therefore, it is necessary to utilize neural network architectures to assist in correcting
the signals before fusion.
The algorithm of the Kalman filter assisted by neural networks [26] demonstrates
outstanding performance in various application scenarios. The main advantages are
as follows:
1. Complementarity: Kalman filters are primarily suitable for linear systems and envi-
ronments with Gaussian noise, while neural networks excel in handling nonlinear,
non-Gaussian, or complex systems. Therefore, the combination of the two can fully uti-
lize their respective strengths, complementing each other’s shortcomings, and thereby
more accurately describing and predicting the dynamic behavior of the system.
2. Improved prediction accuracy: Through the learning and modeling capabilities of
neural networks, nonlinear characteristics of the system can be captured, thereby
providing more accurate models and predictions for the Kalman filter. This helps
reduce errors during the filtering process and improves prediction accuracy.
3. Strong adaptability: Due to the powerful learning and adaptation capabilities of
neural networks, they can adapt to changes and uncertainties in system parameters.
Therefore, even if the dynamic characteristics of the system change, neural-network-assisted Kalman filters can quickly adjust and adapt to the new environment.
4. Enhanced robustness: When facing noise interference, missing data, or outliers, the
combination of neural networks and Kalman filters can enhance the robustness of the
system. Neural networks can learn and handle these exceptional situations, while
Kalman filters can smooth and correct estimation results to some extent.
5. Expanded application scope: By integrating neural networks and Kalman filters, the
application scope of Kalman filters can be expanded to more complex systems and
scenarios. For example, in fields such as autonomous driving, robot navigation, and
financial forecasting, this fusion method can help achieve more accurate and reliable
state estimation and prediction.
Therefore, researching techniques that can improve the performance of navigation
systems by constructing neural network architectures to correct signal errors has become
one of the current focuses of research work.

3.1. Neural Network Structure and Function


With the advancement of Artificial Intelligence (AI) and deep learning, the application
of neural networks in fusion navigation primarily focuses on the integration and processing
of multisource navigation information. By constructing appropriate neural network models,
data from different navigation systems can be effectively integrated to enhance navigation
accuracy and stability.
Specifically, there are various ways in which neural networks are applied in fusion
navigation [27]. One common approach is to leverage the self-learning and feature ex-
traction capabilities of neural networks to process data from different navigation sensors,
extract useful feature information, and fuse them. This enables the full utilization of the
advantages of each sensor, compensating for the limitations of individual sensors, and
improving the overall performance of the navigation system. Another approach is to utilize
the predictive capabilities of neural networks to forecast the future state of the navigation
system. By learning and analyzing historical data, neural networks can capture the motion
patterns and trends of the navigation system, thereby accurately predicting future states.
This facilitates early detection of navigation errors and corrections, thereby enhancing navi-
gation accuracy and reliability. Additionally, neural networks can be combined with other
algorithms and technologies to form more robust fusion navigation solutions. For example,
neural networks can be combined with Kalman filter algorithms to filter navigation data,
further reducing the impact of noise and errors. Alternatively, neural networks can be
combined with map-matching techniques to constrain and optimize navigation results
using map information.
The neural network is a mathematical model algorithm that mimics the behavior
characteristics of animal neural networks. It achieves information processing by adjusting
the relationships between a large number of interconnected nodes internally. It possesses
adaptive self-learning capabilities, automatically grasps environmental features, achieves
automatic target recognition, and exhibits advantages such as good fault tolerance and
strong anti-interference ability.
Structurally, neural networks are composed of a large number of neurons intercon-
nected with each other. These neurons receive input signals, process them through activa-
tion functions, and then output signals to the next layer. Different neural network models
have different structures, such as perceptrons, feedforward networks, residual networks,
and RNNs, among others.
Specifically, the perceptron [28] is the most basic of all neural networks and serves as
the fundamental component of more complex neural networks. It connects only one input
neuron and one output neuron. Feedforward networks [29] are collections of perceptrons,
consisting of input layers, hidden layers, and output layers, with signals propagating
unidirectionally between these layers. Residual networks [30] achieve signal propagation
across layers by skipping connections, thereby reducing the problem of gradient vanishing.

RNNs [31] contain loops and self-repetition, enabling them to handle data with temporal
dependencies.

3.1.1. Artificial Neurons


Artificial neurons [32] generate an output value representing their activity by applying a nonlinear activation function. In this process, this study assumes that the neuron receives n input signals X = (X_1, X_2, · · · , X_n). These input signals are weighted and summed up, represented by a state variable z. Finally, the output value of this neuron, namely its activity a, is calculated based on the state z through an activation function. Figure 3 illustrates the model of an artificial neuron.

Figure 3. Model of an artificial neuron.
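The neuron model of Figure 3 reduces to a weighted sum followed by an activation function. A minimal sketch, with arbitrary example weights and a sigmoid activation assumed for illustration:

```python
import math

def neuron(x, w, b):
    # Weighted sum z = w . x + b (the state variable), then activity a = sigma(z)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```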

The most commonly used activation function in traditional neural networks is the sigmoid function. The sigmoid function refers to a class of S-shaped curve functions, with commonly used sigmoid functions including the logistic function σ(x) and the tanh function.

σ(x) = 1 / (1 + e^(-x))        (32)

tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))        (33)
3.1.2. Multilayer Feedforward Neuraltanh (x ) =
Network (33)
e x + e− x
The multilayer feedforward neural network [33], also known as the multilayer percep-
tron (MLP), introduces hidden layers between the input and output layers to enhance the
3.1.2. Multilayer
performance Feedforward
of single-layer Neural Network
perceptrons. The number of these hidden layers can be one
or more,
The and they function
multilayer as theneural
feedforward “internal representation”
network of input
[33], also known as patterns. With per-
the multilayer this
improvement,
ceptron (MLP),the original single-layer
introduces hidden layers perceptrons
between the transform intooutput
input and multilayer
layersperceptrons,
to enhance
thereby enhancing
the performance oftheir ability to
single-layer handle complex
perceptrons. patterns.
The number The training
of these hidden of multilayer
layers can be
feedforward
one or more, and they function as the “internal representation” of input patterns.hence
neural networks often utilizes the error backpropagation algorithm, With
they are also commonly
this improvement, referredsingle-layer
the original to as Back Propagation (BP) networks
perceptrons transform [34].
into multilayer percep-
trons,The structure
thereby of multilayer
enhancing feedforward
their ability to handle neural networks
complex is hierarchically
patterns. The training rich, con-
of multi-
sisting of an input neural
layer feedforward layer, several
networks hidden
oftenlayers,
utilizesand
theanerror
output layer. Each layer
backpropagation can be
algorithm,
viewed as anare
hence they independent
also commonly single-layer
referredfeedforward neural network,
to as Back Propagation (BP)with each one
networks [34].linearly
classifying input patterns. However, it is the combination and superposition
The structure of multilayer feedforward neural networks is hierarchically rich, of these layers
con-
that enables multilayer feedforward neural networks to perform more
sisting of an input layer, several hidden layers, and an output layer. Each layer can becomplex and refined
classification
viewed as antasks on input single-layer
independent patterns. feedforward neural network, with each one line-
Multilayer feedforward
arly classifying input patterns. neural networks
However, arecombination
it is the renowned for their
and excellent nonlinear
superposition of these
processing capabilities. Despite their relatively simple structure, they
layers that enables multilayer feedforward neural networks to perform more complex and have an extremely
wide
refinedrange of applications.
classification tasks onThese
input networks
patterns. can approximate any continuous function
and square-integrable function with
Multilayer feedforward neural networks arbitraryare precision.
renowned Moreover,
for theirthey can accurately
excellent nonlinear
represent any finite training sample set, making them of significant value in various fields.
processing capabilities. Despite their relatively simple structure, they have an extremely
Figure 4 illustrates the model of a multilayer feedforward neural network.
wide range of applications. These networks can approximate any continuous function and
square-integrable function with arbitrary precision. Moreover, they can accurately repre-
sent any finite training sample set, making them of significant value in various fields. Fig-
ure 4 illustrates the model of a multilayer feedforward neural network.
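The neuron of Figure 3 stacked into the layered network of Figure 4 can be sketched as a minimal NumPy forward pass; the layer sizes and random weights below are illustrative only, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, Equation (32)
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, layers):
    # Each layer computes a weighted sum z = Wx + b (the neuron state)
    # and then the activity a = sigma(z); outputs feed the next layer.
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                             # n = 4 input signals
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),  # hidden layer
          (rng.standard_normal((2, 8)), np.zeros(2))]  # output layer
y = mlp_forward(x, layers)
print(y.shape)  # (2,)
```

Because every activity passes through the logistic function, all outputs lie strictly between 0 and 1.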

Figure 4. Model of a multilayer feedforward neural network.

3.1.3. Convolutional Neural Network
The CNN [35] is a type of neural network specifically designed to handle data with grid-like structures. This network architecture excels in image processing tasks, capable of identifying two-dimensional patterns with shift, scale, and other forms of distortion invariance. The basic structure of a CNN includes convolutional layers, pooling layers, and fully connected layers.

Convolutional Layer [36]: The convolutional layer is the core of a CNN, consisting of multiple convolutional kernels (or filters). These kernels slide over the input data and perform convolution operations to generate feature maps. Each convolutional kernel can learn specific features from the input data.

Pooling Layer [37]: The pooling layer typically follows the convolutional layer and is used to reduce the spatial size of the data (i.e., downsampling), decrease the number of parameters in the network to prevent overfitting, and enhance the model’s robustness. Common pooling operations include max pooling and average pooling.

Fully Connected Layer [38]: The fully connected layer is usually located in the last few layers of the CNN, responsible for receiving the features extracted from the preceding layers and outputting the final prediction results. In classification tasks, a softmax layer is often appended after the fully connected layer to convert the outputs into probability distributions. Throughout the training process, the CNN continuously updates the weights of the convolutional kernels and fully connected layers using the backpropagation algorithm to minimize the error between the predicted values and the actual values.
3.1.4. Recurrent Neural Network
The RNN [39] is a type of neural network specialized in handling sequential data. Its core idea is to use the previous output as part of the current input to capture dependencies in time series. The basic structure of an RNN includes a hidden layer and an output layer. In RNNs, each unit in the hidden layer receives a weighted sum of the output from the previous time step and the input at the current time step, undergoes a nonlinear transformation through an activation function, and passes the result to the next time step. This structure enables RNNs to handle variable-length sequence data and perform recursive operations along the sequence evolution direction.

RNNs have wide-ranging applications in various fields such as text classification, machine translation, speech recognition, image/video captioning, time series prediction, and recommendation systems. In these applications, RNNs can model dependencies between sentences and phrases and temporal dependencies between speech and language, as well as spatial relationships between frames.
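The recurrence described above can be written in a few lines of NumPy; the dimensions and random weights here are illustrative:

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    # New hidden state: nonlinear transform of the current input
    # plus the previous time step's output.
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

rng = np.random.default_rng(1)
Wx = rng.standard_normal((5, 3))
Wh = rng.standard_normal((5, 5))
b = np.zeros(5)

h = np.zeros(5)                              # initial hidden state
for x_t in rng.standard_normal((7, 3)):      # a sequence of 7 time steps
    h = rnn_step(x_t, h, Wx, Wh, b)          # recursion along the sequence
print(h.shape)  # (5,)
```

The same weights are reused at every time step, which is what lets the network accept sequences of any length.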
Despite their strong performance in certain domains, RNNs also have some notable
drawbacks. During training, RNNs often encounter the problems of exploding or vanishing
gradients, leading to unstable training or difficulties in convergence. Additionally, com-
pared with other types of neural networks, RNNs typically require more memory space,
limiting their application in large-scale datasets. Moreover, when using certain activation
functions, RNNs may struggle to effectively handle excessively long sequences, which can
adversely affect their performance.
To address these issues, researchers actively explore and propose various improvement
strategies. Among them, LSTM networks [40] undoubtedly stand out as the most promi-
nent and representative solution. LSTM introduces gate mechanisms and memory units,
enabling more effective processing of long sequence data and alleviating the problems of
vanishing and exploding gradients. This has led to significant performance improvements
for LSTM in many sequence processing tasks.

3.2. Constructing the Framework for CNN- and LSTM-Assisted GNSS/INS Navigation System
Because the fused navigation data form a time series, there is an obvious spatiotemporal correlation within them. Therefore, a CNN and LSTM are combined: the CNN extracts high-dimensional features from the long-term dataset, and the LSTM synthesizes these high-dimensional features for time series prediction.
The combination of a CNN and LSTM [41–43] forms a powerful deep learning archi-
tecture, leveraging the CNN’s feature extraction capabilities and LSTM’s ability to handle
sequence data and capture long-term dependencies. The key steps in constructing a CNN-
LSTM-based network model are feature extraction in the CNN layer and model training in
the LSTM layer.
Advantages of this combination include the following:
• Powerful Feature Extraction: The CNN automatically extracts useful features from raw data, reducing the need for manual feature engineering.
• Handling Sequence Data: LSTM can process variable-length sequence data and capture long-term dependencies, which is crucial for many practical applications.
• Flexibility: The combination of a CNN and LSTM can be adjusted and optimized based on the specific requirements of the task, such as adjusting the number of CNN layers, convolutional kernel sizes, and LSTM hidden units.
Based on the characteristics of GNSS/INS navigation system data, the specific steps to construct the corresponding neural network architecture are as follows:

3.2.1. CNN Feature Extraction


The CNN structure consists of an input layer, convolutional layers, pooling layers, a
fully connected layer, and an output layer. Among them,
Input Layer: Takes preprocessed GNSS and INS data as input.
Convolutional Layer: Utilizes multiple convolutional kernels to perform convolution
operations on input data, extracting local features.
Pooling Layer: Conducts pooling operations on the feature maps output from the
convolutional layer, further compressing features and reducing computational complexity.
Fully Connected Layer: Flattens the output of the pooling layer and integrates and
transforms features through fully connected layers.
The mathematical model expression for the CNN is

y_i^n = f(x^(n−1) ∗ C_i^n + d_i^n)        (34)

where y_i^n is the output of the ith convolution in the nth convolutional layer; x^(n−1) is the input of the nth convolutional layer; ∗ is the convolution operation; C_i^n is the weight of the ith convolutional kernel in the nth convolutional layer; and d_i^n is the bias parameter of the ith convolutional kernel in the nth convolutional layer.
In order to increase the diversity of the learned features, CNN-layer feature extraction is performed with 4 convolutional computation layers, where the 1st convolutional layer has 64 channels, the 2nd convolutional layer has 126 channels, the 3rd convolutional layer has 256 channels, and the 4th convolutional layer has 256 channels. After each convolutional computation layer, the output of the convolutional layer is nonlinearly mapped by the activation function. Here, the ReLU function is used. Its expression is

f(x) = max(0, x)        (35)

It is characterized by fast convergence and simplicity in finding the gradient. Its function image is shown in Figure 5.

Figure 5. ReLU function.
The pooling layer is sandwiched between successive convolutional layers and is used to compress the amount of data and parameters and reduce overfitting. The fully connected layer is at the tail of the convolutional neural network and is connected in the same way as traditional neural network neurons.
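Equations (34) and (35), together with the pooling step, can be sketched for a one-dimensional input as follows; the kernel counts are toy values, not the 64/126/256/256 configuration described above:

```python
import numpy as np

def relu(x):
    # Equation (35): f(x) = max(0, x)
    return np.maximum(0.0, x)

def conv1d_layer(x, kernels, biases):
    # Equation (34): y_i = f(x * C_i + d_i), "*" being a (valid) convolution
    L = x.shape[0]
    k = kernels.shape[1]
    out = np.empty((kernels.shape[0], L - k + 1))
    for i, (C, d) in enumerate(zip(kernels, biases)):
        for t in range(L - k + 1):
            out[i, t] = np.dot(x[t:t + k], C) + d
    return relu(out)

def max_pool1d(y, size=2):
    # Compress each feature map by keeping the max of adjacent pairs
    T = y.shape[1] // size
    return y[:, :T * size].reshape(y.shape[0], T, size).max(axis=2)

x = np.array([0.0, 1.0, -1.0, 2.0, 0.5, -0.5, 1.5, 0.0])
kernels = np.array([[1.0, -1.0, 0.0],      # 2 kernels of width 3
                    [0.5, 0.5, 0.5]])
features = conv1d_layer(x, kernels, biases=np.zeros(2))
pooled = max_pool1d(features)
print(features.shape, pooled.shape)  # (2, 6) (2, 3)
```

Each kernel produces one feature map; pooling halves its length while retaining the strongest responses.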
3.2.2. LSTM Model Training

The GNSS/INS data processed in the CNN layer are fed into the LSTM layer for further feature learning of the GNSS/INS data, while the GNSS/INS sequence data are processed and memorized using gating and long- and short-term memory modules, and model training and prediction are performed.

The main mechanisms for capturing long-term dependencies in LSTM networks are as follows:

Forget Gate [44]: The forget gate determines which information should be retained in the memory cell. It takes the input at the current time step and the hidden state from the previous time step as inputs and outputs a value between 0 and 1, controlling the degree of information retention in the memory cell. When the output of the forget gate is close to 1, it indicates the retention of most information, while an output close to 0 indicates the forgetting of most information.

Input Gate [45]: The input gate decides which new information should be added to the memory cell. Similarly, it takes the input at the current time step and the hidden state from the previous time step as inputs and outputs two values: one for controlling the amount of new information to be added and another for generating the new candidate cell state.

Cell State [46]: The cell state is the core component of the LSTM network, responsible for storing and transmitting information in long time sequences. At each time step, the cell state is updated based on the outputs of the forget gate and input gate. Specifically, the cell state is determined by the previous time step’s cell state, the output of the forget gate, and the output of the input gate.

Output Gate [47]: The output gate determines the output value at the current time step. It takes the input at the current time step, the hidden state from the previous time step, and the current cell state as inputs, and outputs a value used to compute the current time step’s hidden state. The hidden state is the output of the LSTM network at each time step, which can be used for subsequent layer processing or as a representation of the entire sequence.

Through these mechanisms, LSTM networks can selectively retain and update information, effectively handling dependencies in long time sequences. This capability makes LSTM networks perform excellently in tasks involving long-term dependencies, such as speech recognition, machine translation, and time series prediction.

Figure 6 depicts the structure of an LSTM neural network.

Figure 6. LSTM neural network structure.

The
Theupdate
updateprocess of LSTM
process of LSTM at time
at time step step
t is ast follows:
is as follows:

(
t = σ Wi xt + U i ht −1 + Vi ct −1
it = σi(W )


 i xt + Ui ht−1 + Vi ct−1 )

σf W= σ ( )


+ +
 
W x U f t −1 V f ct −1
h


f = x + U h + V c
 t



 t f t f
f t t − 1 f t − 1

 o = ( )

o xσ

t σo(tW= WUooxhtt−+1U + oVhotc−t1−+1 )Vo ct −1


t+
 ~
(36) (36



 ct =(W
cet =tanh tanh
c xt +W (
Uccxhtt−+1 )U c ht −1 )
ct =fct t⊗=ct−f t1 ⊗
~

+ ict t⊗ ce+t it ⊗ ct



 −1


( )

ht =

oht t⊗=tanh
ot ⊗ (cttanh ct


)

wherext xist the


where input
is the at the
input at current time step,
the current timeσstep, σ represents
represents the logisticthe
sigmoid function,
logistic sigmoid func
and Vi , Vf , Vo denote element-wise multiplication. The forget gate f t controls how much
tion, and Veach
information f , Vo denote
i , V memory element-wise
cell should forget, themultiplication. The forget
input gate it controls ft controls
gate new
how much
information should be added to each memory cell, and the output gate ot controls how
how much information each memory cell should forget, the input gate it controls how
much information each memory cell should output.
much new information should be added to each memory cell, and the output gate ot
3.2.3. Constructing the CNN-LSTM Network Model

Based on the above steps, the CNN-LSTM network model is constructed. The specific architecture is illustrated in Figure 7.
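A minimal end-to-end sketch of this kind of architecture follows: a convolutional feature extractor feeding an LSTM whose final hidden state drives a dense prediction head. All dimensions, the peephole-free LSTM variant, and the random weights are illustrative assumptions, not the exact configuration of Figure 7:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d_relu(x, K):
    # x: (T, c_in) multichannel time series; K: (c_out, k, c_in) kernels
    T, c_in = x.shape
    c_out, k, _ = K.shape
    y = np.empty((T - k + 1, c_out))
    for t in range(T - k + 1):
        for i in range(c_out):
            y[t, i] = np.sum(x[t:t + k] * K[i])
    return relu(y)

def lstm_last_hidden(seq, W, U, b, n_h):
    # Peephole-free LSTM scanning the CNN feature sequence
    h, c = np.zeros(n_h), np.zeros(n_h)
    for x_t in seq:
        z = W @ x_t + U @ h + b                 # stacked gate pre-activations
        i, f, o = (sigmoid(z[j * n_h:(j + 1) * n_h]) for j in range(3))
        g = np.tanh(z[3 * n_h:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(3)
T, c_in, c_out, k, n_h = 20, 6, 8, 3, 16
x = rng.standard_normal((T, c_in))              # e.g., GNSS/INS channels over time
K = 0.1 * rng.standard_normal((c_out, k, c_in))
features = conv1d_relu(x, K)                    # (18, 8) feature sequence
W = 0.1 * rng.standard_normal((4 * n_h, c_out))
U = 0.1 * rng.standard_normal((4 * n_h, n_h))
h_final = lstm_last_hidden(features, W, U, np.zeros(4 * n_h), n_h)
pred = rng.standard_normal((3, n_h)) @ h_final  # dense head, e.g., an error estimate
print(features.shape, h_final.shape, pred.shape)  # (18, 8) (16,) (3,)
```

The design choice mirrors the text: the CNN stage reduces the raw multichannel window to a feature sequence, and the LSTM stage turns that sequence into a single prediction.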

Figure 7. Neural network framework combining CNN and LSTM.

4. Experimental Comparison
Objective: The experiment aims to validate the optimization effect of integrating a CNN with LSTM to assist Kalman filtering for fused navigation. Therefore, during the experimental process, scenarios are set up for both undisturbed and disturbed conditions. In the undisturbed scenario, the effectiveness of loosely coupled and tightly coupled Kalman filtering methods is primarily compared. In the disturbed scenario, the focus is on comparing the errors between using a conventional CNN and a neural network combining a CNN and LSTM, as well as the differences in navigation effectiveness before and after using neural network assistance.
Experimental Equipment: Ublox M8N for receiving GNSS signals. WHEELTEC N100N as the inertial navigation module. UM980 as RTK ground truth.
Data Collection: Base station coordinates (x, y, z). Measurement distance. Signal strength of the first path for transmitted signals. Signal strength of the first path for received signals.
4.1. Undisturbed Scenario

4.1.1. Experimental Results of Loosely Coupled and Tightly Coupled Systems
Experimental results of the loosely coupled closed-loop system and the tightly coupled system were obtained under the same experimental conditions, as shown in Figure 8.

Figure 8. Experimental results of the loosely coupled closed-loop system (LC-KF, close-loop) and the tightly coupled system (TC-KF): x-y trajectories (m) of the INS+GNSS integration epochs against the ground truth.
4.1.2. Comparative Analysis of Experiments

The experimental results are analyzed for 2D and Z-direction errors, and correlation error analysis plots and CDF plots are constructed, as shown in Figure 9. Their specific data are shown in Tables 1 and 2.
Table 1. Table of 2D and Z-direction errors for loosely coupled closed-loop systems.

                                      Average Value    Variance    Standard Deviation
Statistics of the 2D error                 0.138227    0.004441              0.066644
Statistics of the Z-direction error        0.017386    0.018335              0.135407

Table 2. Table of 2D and Z-direction errors for tightly coupled systems.

                                      Average Value    Variance    Standard Deviation
Statistics of the 2D error                 0.133644    0.004515              0.067196
Statistics of the Z-direction error        0.010795    0.010478              0.102362

Tightly coupled Kalman filtering improves localization performance by (0.138227 − 0.133644)/0.138227 ≈ 3.32% compared with loosely coupled Kalman filtering.
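The 3.32% figure follows directly from the mean 2D errors in Tables 1 and 2:

```python
# Relative improvement of tightly coupled over loosely coupled Kalman
# filtering, computed from the mean 2D errors in Tables 1 and 2.
lc_mean_2d = 0.138227   # loosely coupled, Table 1
tc_mean_2d = 0.133644   # tightly coupled, Table 2
improvement = (lc_mean_2d - tc_mean_2d) / lc_mean_2d
print(f"{improvement:.4f}")  # 0.0332
```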
Based on the experimental results, it can be observed that the tightly coupled system
exhibits a noticeable improvement in fusion navigation effectiveness after utilizing raw
observation information. Combining the network architectures of loosely coupled and
tightly coupled systems, their advantages and disadvantages can be summarized as follows:
The main advantage of the loosely coupled approach lies in its simple and easy-to-
implement structure. In a loosely coupled system, each sensor (such as IMU and GNSS)
works independently and outputs navigation information, which is then used to correct
errors in the Kalman filter by taking the difference of this information as input. This
approach preserves the original GNSS results without requiring modifications to GNSS
hardware. However, the drawback of the loosely coupled approach is that when the
number of visible satellites in the environment is small, GNSS navigation information
cannot be obtained, leading to a decrease in the performance of the Kalman filter and thus
affecting the positioning accuracy of the entire system.
In contrast, the tightly coupled approach is relatively more complex than the loosely
coupled approach. It requires the processing of raw GNSS data (such as pseudorange
and pseudorange rate) and comparing them with the corresponding data outputted by
INS, with the difference being used as input for the Kalman filter. The main advantage
of this approach is that it addresses the problems associated with observation input in
a combined system with only one Kalman filter. However, the drawback of the tightly coupled approach lies in its relatively complex implementation structure and inability to independently output GNSS navigation results.
Figure 9. Two-dimensional and Z-direction error analysis plots for loosely coupled closed-loop system vs. tightly coupled system.

4.2. Disturbed Scenario

In the presence of interference, the process of correcting the Kalman filter with a neural network can be divided into the following steps:
Data Collection and Processing: Initially, data required for training the neural net-
Data Collection and Processing: Initially, data required for training the neural network
work need to be collected. This includes measurement data from INS, observation data
need to be collected. This includes measurement data from INS, observation data from
from GNSS, etc. These data undergo appropriate preprocessing, such as noise removal
GNSS, etc. These data undergo appropriate preprocessing, such as noise removal and
and normalization, to facilitate the training of the neural network.
normalization, to facilitate the training of the neural network.
Definition of Neural Network Structure: Based on specific application scenarios and
Definition of Neural Network Structure: Based on specific application scenarios and
requirements, an appropriate neural network structure is defined. Here, the networks
requirements, an appropriate neural network structure is defined. Here, the networks
used for
used for comparison
comparison are are aa conventional
conventional CNNCNN and and aa combination
combination of of aa CNN
CNN andand LSTM.
LSTM.
The input
The input toto the
the network
network comprises
comprises observation
observation data
data and
and state
state data,
data, while
while the
the output
output is is
measurement
measurement error. error.
Training of
Training of the
the Neural
Neural Network:
Network: The The collected
collected data
data are
are utilized
utilized to to train
train the
the neural
neural
network. The training objective is to enable the neural network to learn
network. The training objective is to enable the neural network to learn the mapping the mapping rela-
tionship between the Kalman filter parameters and the INS state estimation
relationship between the Kalman filter parameters and the INS state estimation and actual and actual
GNSS observation
GNSS observation data.data.During
Duringtraining,
training,thethe
network’s
network’sperformance
performance is optimized
is optimizedby ad-by
justing parameters such as weights and
adjusting parameters such as weights and biases. biases.
Neural Network Correction: In the Kalman filter process, the trained neural network is employed to adjust the covariance matrix and other parameters of the Kalman filter.
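The paper states only that the trained network adjusts the covariance matrix; one simple correction rule is to inflate the measurement covariance R with the network's predicted error before the standard Kalman update. The additive inflation below is an illustrative assumption, not the authors' exact scheme:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    # Standard Kalman measurement update.
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def nn_corrected_R(base_R, predicted_error):
    # Fold the network's predicted measurement error into the covariance
    # matrix by inflating R with the squared prediction (assumed rule).
    return base_R + np.diag(np.square(predicted_error))

x, P = np.zeros(2), np.eye(2)
H, z = np.eye(2), np.array([1.0, 0.5])
R = nn_corrected_R(0.1 * np.eye(2), predicted_error=np.array([0.3, 0.2]))
x, P = kalman_update(x, P, z, H, R)          # larger R -> more cautious update
```

Inflating R makes the filter trust a degraded GNSS measurement less, which is the intended effect under strong interference.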

4.2.1. Conventional Convolutional Neural Network Architecture
The navigation dataset serves as input information, and the training process is depicted in Figure 10.

Figure 10. CNN training process.
The measurement error obtained through training is shown in Figure 11.

Figure 11. Training process LOSS RMSE error graph.

The final training outcome indicates a loss error of RMSE_error = 1.2106.

4.2.2. Neural Network Architecture Combining CNN and LSTM
The navigation dataset serves as input information, and the training process is depicted in Figure 12.

Figure 12. Training process of the neural network architecture combining CNN and LSTM.

The measurement error obtained through training is shown in Figure 13.

Figure 13. Training process LOSS RMSE error graph.
The final training outcome indicates a loss error of RMSE_error = 0.7395.

4.2.3. Comparison before and after Signal Correction


It is evident that the neural network combining a CNN and LSTM outperforms the conventional CNN in terms of training effectiveness. Therefore, the noise predicted by the
neural network combining a CNN and LSTM is incorporated into the covariance matrix of
the tightly coupled Kalman filter, and the navigation performance is compared with the
navigation performance before signal correction.
In the case of strong signal interference, a comparison of the effect before and after
navigation using neural-network-assisted tightly coupled Kalman filtering is depicted in
Figure 14.


Figure 14. Comparison of the effect before and after navigation using neural-network-assisted tightly coupled Kalman filtering.

The experimental results are analyzed for 2D and Z-direction errors, and error analysis plots and CDF plots are constructed, as shown in Figure 15. The specific data are shown in Tables 3 and 4.
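The CDF plots referred to here can be built directly from the raw error series; a small sketch (the sample error values are made up for illustration):

```python
def empirical_cdf(errors):
    # Sort the errors; the CDF at the i-th sorted value is (i + 1) / n,
    # i.e. the fraction of samples at or below that error.
    xs = sorted(errors)
    n = len(xs)
    ps = [(i + 1) / n for i in range(n)]
    return xs, ps

xs, ps = empirical_cdf([0.9, 0.4, 1.3, 0.7])   # illustrative 2D errors in metres
```

Plotting `ps` against `xs` for the assisted and unassisted runs gives the CDF comparison summarized in Figure 15.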
Table 3. 2D and Z-direction errors for CNN-LSTM unassisted Kalman filtering.

                       Average Value    Variance     Standard Deviation
2D error               1.082632         0.198008     0.444981
Z-direction error      0.654914         0.138698     0.372422

Table 4. 2D and Z-direction errors for CNN-LSTM-assisted Kalman filtering.

                       Average Value    Variance     Standard Deviation
2D error               0.559872         0.063042     0.251082
Z-direction error      0.234357         0.052803     0.229790

After neural network processing, the localization performance was significantly improved, with a relative reduction in mean 2D error of (1.082632 − 0.559872)/1.082632 = 0.4829.
From this, it can be concluded that in scenarios where the signal encounters strong
interference, relying solely on the Kalman filter algorithm for GNSS/INS fusion navigation
will fail to meet the task requirements. However, by incorporating the noise predicted by
the neural network into the covariance matrix of the tightly coupled Kalman filter, there
is a significant improvement in navigation performance compared with when errors are
not corrected.
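The improvement figure quoted above follows directly from the mean 2D errors in Tables 3 and 4:

```python
mean_2d_unassisted = 1.082632   # Table 3, CNN-LSTM unassisted Kalman filtering
mean_2d_assisted = 0.559872     # Table 4, CNN-LSTM-assisted Kalman filtering

# Relative reduction in mean 2D error after CNN-LSTM assistance.
improvement = (mean_2d_unassisted - mean_2d_assisted) / mean_2d_unassisted
# improvement ≈ 0.4829, i.e. roughly a 48.3% smaller mean 2D error
```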

Figure 15. Two-dimensional and Z-direction error analysis plots before and after signal correction.

5. Conclusions
In conclusion, this study focuses on the key technologies of the Kalman filter fusion navigation algorithm assisted by neural networks. By combining the powerful learning capabilities of neural networks with the precise estimation capabilities of the Kalman filter, a novel solution is provided for UAV navigation technology. This algorithm not only significantly improves navigation accuracy and stability but also demonstrates outstanding adaptability in dealing with complex environments and variable noise interference.

During UAV missions, the neural-network-based Kalman filter fusion navigation


algorithm can dynamically adjust and optimize the navigation model based on sensor data
in real time, effectively overcoming many limitations of traditional navigation methods.
Whether facing complex natural environments or deliberate interference, this algorithm can
provide precise and reliable navigation services for UAVs due to its excellent performance.
In the construction of neural network architecture, this study combines existing re-
search results and proposes a neural network architecture that combines a CNN with LSTM
to assist the Kalman filter optimization algorithm. By leveraging the advantages of both, the
proposed architecture significantly outperforms traditional neural network architectures, reducing the training RMSE from 1.2106 to 0.7395 (a reduction of about 39%) in simulation experiments. Moreover, it
verifies the effective improvement of navigation performance under strong interference
conditions by the optimization algorithm.
Currently, interference in UAV navigation systems includes not only jamming but
also deception techniques, which are widely used. Therefore, in future research, further
exploration of interference techniques will be conducted to consider deception techniques
in the task environment. The neural network architecture will be further optimized to
adapt to even more complex task scenarios.
In summary, the Kalman filter fusion navigation algorithm assisted by neural networks,
with its unique advantages and strong potential, has become an important development
direction in modern UAV navigation technology. The optimization algorithm will play
an increasingly important role in areas such as autonomous driving, UAV cruising, and
intelligent robotics.

Author Contributions: Conceptualization, K.C. and L.Y.; methodology, K.C., P.Z. and L.Y.; software
and validation, K.C. and J.S.; formal analysis, K.C.; investigation, K.C. and P.Z.; resources, J.S.; data
curation, K.C.; writing—original draft preparation, K.C. and J.S.; writing—review and editing, P.Z.
and L.Y.; visualization, L.Y.; supervision, P.Z. All authors have read and agreed to the published
version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data are not publicly available due to the confidential nature of
our school.
Conflicts of Interest: The authors declare no conflicts of interest.


Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
