1 Predictive Society and Data Analytics Lab, Faculty of Information Technology and Communication Sciences,
Tampere University, 33720 Tampere, Finland; [email protected] (A.F.); [email protected] (O.Y.-H.)
2 Faculty of Engineering and Information Technology, Taiz University, Taiz P.O. Box 6803, Yemen
3 Institute for Systems Biology, Seattle, WA 98195, USA
* Correspondence: [email protected]
2. Background
Neural networks, a subset of machine learning models [19], have achieved remarkable
success in various applications, ranging from computer vision [20–22], speech recogni-
tion [23], and natural language processing [24–27] to material science, fluid mechanics [28],
genetics [29], cognitive science [30], genomics [31], and infrastructural health monitor-
ing [32]. These models consist of multiple layers of interconnected neurons that process
and learn from vast amounts of data. Through training utilizing techniques such as
back-propagation and gradient descent, NNs iteratively adjust their weights to minimize
prediction errors, thereby improving their performance.
Interestingly, the remarkable success of NNs in data-driven applications does not
seamlessly extend to problems governed by complex physical laws. Many scientific and
engineering problems, such as fluid dynamics, structural mechanics, and heat transfer, are
described by PDEs that encapsulate the underlying physical principles. Unfortunately,
traditional NNs do not inherently understand these physical principles but rely solely
on data to learn accurate representations, and for this reason may produce physically
inconsistent results. This limitation is particularly pronounced in scenarios where data are
scarce or noisy.
To address this shortcoming, PINNs have emerged as a powerful paradigm that
integrates physical laws directly into the machine learning process. The core idea behind
PINNs is to embed the governing equations of the physical system into the neural network’s
architecture or loss function. This is typically achieved by incorporating these equations
into the loss function, which the network seeks to optimize during training. By doing so,
PINNs ensure that the learned solutions not only fit the data but also adhere to the physical
constraints imposed by the PDEs. This approach reduces the reliance on large datasets and
leverages the strengths of both physics-based modeling and data-driven learning, resulting
in models that are more accurate, generalizable, and interpretable.
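To make the composite objective concrete, the following minimal PyTorch sketch combines a data-misfit term with a physics-residual term in a single loss; the function signature, the residual callable, and the weighting factor are illustrative assumptions rather than a fixed formulation from the surveyed works.

```python
# Minimal sketch of the composite PINN objective: data fit + physics residual.
import torch

def pinn_loss(model, x_data, y_data, x_colloc, physics_residual, weight=1.0):
    """Combine a data misfit with a penalty on the governing-equation residual.

    `physics_residual` is a user-supplied callable (hypothetical) returning the
    residual of the PDE/ODE evaluated at the collocation points `x_colloc`.
    """
    data_loss = torch.mean((model(x_data) - y_data) ** 2)              # fit observations
    physics_loss = torch.mean(physics_residual(model, x_colloc) ** 2)  # enforce physics
    return data_loss + weight * physics_loss
```

Because the physics term acts on collocation points rather than labeled samples, it can be evaluated wherever the governing equations are defined, which is what reduces the dependence on large datasets.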
The development of PINNs can be seen as part of a broader movement towards hybrid
models that combine the deductive rigor of classical physics with the inductive power of
machine learning. This fusion offers several advantages. First, it reduces the dependency
on large datasets, as the physical laws provide a priori knowledge that guides the learning
process. Second, it enhances the robustness and reliability of the models, ensuring that
the predictions are physically consistent even in regions where data are sparse. Third, it
opens up new avenues for solving inverse problems, where the goal is to infer unknown
parameters or inputs from observed outputs, by leveraging both data and physical laws.
In this survey, we explore PINNs, focusing on their core methodologies and innova-
tions. We review key applications across diverse fields, compare PINNs with traditional
numerical methods, and discuss their advantages and challenges. We also examine recent
trends and future directions, identifying potential areas for further research and applica-
tion. This includes highlighting the benefits of PINNs over traditional methods and other
machine learning approaches, as well as their limitations.
3. Methodologies
Deep learning has become widely adopted in scientific and engineering fields due to
its ability to model complex relationships in various data types without requiring users to
understand the system’s physical parameters. Despite its versatility and effectiveness, chal-
lenges such as insufficient training data, lack of interpretability, and disregard for physical
laws persist. To address these issues, enhanced methods have been developed, including
physics-informed deep learning [7,11,40–43]. In this section, we discuss the methodologies
employed in various studies related to PINNs. Our aim is to provide an overview of
the diverse approaches utilized in the development, implementation, and evaluation of
PINNs across different domains. From the choice of neural network architectures to the
integration of physics-based constraints, each methodology offers unique insights into the
fusion of machine learning and physics. By examining the methodologies in detail, we
aim to uncover commonalities, challenges, and opportunities within the burgeoning field
of PINNs.
The integration of prior knowledge into PINNs can be achieved through feature
engineering, model construction, and regularization, as illustrated in Figure 1. The ap-
proach depends on the problem’s nature and the type of neural network model being
employed. By leveraging domain-specific knowledge and known physical principles,
PINNs can achieve more accurate and interpretable predictions for various scientific and
engineering applications.
Figure 1. The general workflow for integrating prior knowledge into a neural network architecture to create a Physics-Informed Neural Network (PINN). The workflow proceeds from defining the problem and acquiring and pre-processing data to constructing the NN model, specifying the cost function, training the model, and evaluating it.
For a single fully connected layer with input vector x, weight matrix W, bias vector b, and activation function f, the pre-activation z and output y are computed as follows:
z = Wx + b (1)
y = f(z) (2)
In the context of multiple layers, denoted by n, the process is iterated for each layer.
Specifically, for the i-th layer, the output y(i) is computed as follows:
z^(i) = W^(i) y^(i−1) + b^(i) (3)
y^(i) = f(z^(i)) (4)
where y^(0) represents the input vector x, and W^(i) and b^(i) denote the weight matrix and
bias vector of the i-th layer, respectively. This recursive computation through layers enables
the network to capture complex relationships within the data. Moreover, during training,
the network aims to minimize a predefined cost function J, often represented as the
mean squared error (MSE) for regression problems or cross-entropy for classification tasks.
For example, in the case of MSE, the cost function can be written as follows:
J(W, b) = (1/m) Σ_{i=1}^{m} (y_i − ŷ_i)² (5)
where m is the number of training examples, yi represents the true output, and ŷi represents
the predicted output.
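As a concrete illustration of Equations (1)–(5), the following NumPy sketch performs a forward pass through a small fully connected network and evaluates the MSE cost; the layer sizes and the tanh activation are arbitrary choices made only for the example.

```python
# Forward pass through n dense layers (Eqs. (3)-(4)) and the MSE cost (Eq. (5)).
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 8, 8, 1]                      # input, two hidden layers, output
W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
b = [np.zeros(m) for m in layer_sizes[1:]]

def forward(x):
    y = x                                        # y^(0) = x
    for i in range(len(W)):
        z = W[i] @ y + b[i]                      # Eq. (3): z^(i) = W^(i) y^(i-1) + b^(i)
        y = np.tanh(z) if i < len(W) - 1 else z  # Eq. (4): y^(i) = f(z^(i)); linear output layer
    return y

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)       # Eq. (5)

x = rng.standard_normal(3)
print(forward(x), mse(np.array([0.5]), forward(x)))
```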
ANNs are often used in PINNs for their simplicity and effectiveness in learning
complex relationships; see [53–58].
Convolutional Neural Networks (CNNs): A CNN shares similarities with ANNs but
includes convolution layers [59–61]. CNNs are particularly useful for problems involving
grid-structured data, such as images or spatio-temporal data [59,62]. However, they can
also accommodate numerical data by using a 1D CNN as the foundational
model. The distinguishing characteristic of CNNs lies in their utilization of convolution
filters, also known as kernels. These filters, square in shape, are applied to the input pixels
through a process of convolution, traversing each square defined by a user-specified filter
size and stride. By consistently employing the same filter across the input pixels, CNNs
manage to maintain a concise set of learnable parameters [63].
A convolutional layer consists of multiple kernels that generate feature maps. Each
neuron in a feature map is connected to a localized region (receptive field) in the previous
layer. To obtain a feature map, the input is convolved with a kernel, followed by applying
a nonlinear activation function [60].
Mathematically, the feature value z^l_{i,j,k} at position (i, j) in the k-th feature map of the l-th layer is computed as follows:
z^l_{i,j,k} = (w^l_k)^T x^l_{i,j} + b^l_k (6)
where w^l_k is the k-th filter's weight vector, b^l_k is its bias, and x^l_{i,j} is the input patch at (i, j).
Subsequently, pooling layers are often employed to downsample the feature maps
obtained from the convolutional layers, reducing dimensionality while preserving impor-
tant information. A common pooling operation is max pooling, where the maximum value
within a local region is retained.
y^l_{i,j,k} = pool(a^l_{m,n,k}), ∀(m, n) ∈ R_{i,j} (8)
where a^l_{m,n,k} is the activation at position (m, n) in the k-th feature map, (m, n) ranges over the local neighborhood around (i, j), and R_{i,j} is the region of the input map being pooled. Common pooling operations include average pooling and
max pooling.
In CNNs, the training process typically involves minimizing a total loss function L_total, which combines a standard loss function L_std with a regularization term L_reg to prevent overfitting. This regularization term can include weight decay or dropout, among others. The total loss function is defined as follows:
L_total = L_std + λ L_reg (9)
where λ is a coefficient weighting the regularization term.
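The following PyTorch sketch illustrates Equations (6) and (8) together with a regularized total loss for a small 1D CNN; the channel counts, signal length, and weight-decay coefficient are assumptions made only so the example runs.

```python
# 1D convolution (feature maps), max pooling, and a regularized total loss.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=4, kernel_size=3, padding=1),  # Eq. (6): z = w^T x + b per patch
    nn.ReLU(),                                                           # nonlinear activation
    nn.MaxPool1d(kernel_size=2),                                         # Eq. (8): max pooling
    nn.Flatten(),
    nn.Linear(4 * 8, 1),
)

x = torch.randn(5, 1, 16)                 # batch of 5 one-dimensional signals
y = torch.randn(5, 1)
y_hat = model(x)

std_loss = nn.functional.mse_loss(y_hat, y)
reg = sum((p ** 2).sum() for p in model.parameters())   # weight-decay regularizer
total_loss = std_loss + 1e-4 * reg                      # standard loss plus weighted regularizer (L_total above)
```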
Recurrent Neural Networks (RNNs): RNNs extend standard feed-forward networks with recurrent connections, enabling them not only to generate outputs at specific time points but also to transmit
hidden states to subsequent layers. Depending on the RNN variant, outputs at each time
step may also be relayed forward. Conceptually, RNNs can be thought of as multiple
instances of a standard ANN, with each instance communicating information to its succes-
sor [73–75]. This adaptation empowers RNNs to model data across individual time steps
and the interconnections among them.
RNN architectures can vary to suit different problem types. Structurally, RNNs may
operate in one-to-one, one-to-many, many-to-one, or many-to-many configurations [76,77].
A notable application of the many-to-many RNN structure is found in machine translation
tasks. For instance, in a translation scenario, the encoder component of the network receives
input words in English, while the decoder component generates corresponding translated
words in French.
In dynamical systems, governing equations often describe the system’s behavior over
time, frequently taking the form of differential equations. These equations represent a
valuable source of prior knowledge.
RNNs are designed to handle sequential data by maintaining an internal state or
memory of past inputs. Mathematically, a recurrent neural network can be represented
as follows:
h_t = f(W_hh h_{t−1} + W_xh x_t + b_h)
y_t = g(W_hy h_t + b_y) (10)
where h_t is the hidden state at time t; x_t is the input at time t; W_hh, W_xh, and W_hy are weight matrices; b_h and b_y are bias vectors; f is the activation function for the hidden layer;
and g is the activation function for the output layer. For a given sequence of inputs
X = (x_1, x_2, . . . , x_T), the output sequence Y = (y_1, y_2, . . . , y_T) is obtained by iterating the above equations through time steps t = 1, 2, . . . , T.
The training of RNNs involves optimizing a cost function J that measures the discrep-
ancy between the predicted outputs and the actual outputs. One common choice for the
cost function is the mean squared error:
J = (1/T) Σ_{t=1}^{T} ∥y_t − ŷ_t∥² (11)
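A compact NumPy sketch of Equations (10) and (11) is given below; the dimensions, the tanh hidden activation, and the identity output activation are illustrative assumptions.

```python
# Iterate the RNN recurrence over T time steps and compute the MSE cost.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, T = 2, 4, 1, 10
W_xh = rng.standard_normal((d_h, d_in)) * 0.1
W_hh = rng.standard_normal((d_h, d_h)) * 0.1
W_hy = rng.standard_normal((d_out, d_h)) * 0.1
b_h, b_y = np.zeros(d_h), np.zeros(d_out)

X = rng.standard_normal((T, d_in))          # input sequence x_1, ..., x_T
Y_true = rng.standard_normal((T, d_out))

h = np.zeros(d_h)
Y_pred = []
for t in range(T):
    h = np.tanh(W_hh @ h + W_xh @ X[t] + b_h)   # Eq. (10): hidden-state update
    Y_pred.append(W_hy @ h + b_y)               # Eq. (10): output y_t = g(W_hy h_t + b_y)
Y_pred = np.array(Y_pred)

J = np.mean(np.sum((Y_true - Y_pred) ** 2, axis=1))  # Eq. (11)
print(J)
```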
x̂ = U^T x (12)
where N is the number of nodes, K is the number of classes, yi,k is the true label for node i
and class k, and ŷi,k is the predicted probability of node i belonging to class k.
In the context of PINNs, GNNs can be used to model complex physical systems
characterized by interconnected components, such as molecular dynamics simulations or
social network analysis; see [85,88–92].
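As a minimal illustration of the message-passing idea underlying such GNN-based PINNs, the following NumPy sketch applies one normalized graph-convolution layer to a toy four-node graph; the graph, feature dimensions, and weights are assumptions chosen for the example.

```python
# One graph-convolution step: aggregate neighbour features via a normalized adjacency.
import numpy as np

A = np.array([[0, 1, 1, 0],       # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt           # symmetric normalization

X = np.random.default_rng(0).standard_normal((4, 3))   # node features
W = np.random.default_rng(1).standard_normal((3, 2))   # learnable weights

H = np.maximum(A_norm @ X @ W, 0.0)                # one graph-convolution layer with ReLU
print(H.shape)                                     # (4, 2): updated node embeddings
```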
Attention Mechanisms: Attention mechanisms allow neural networks (NNs) to focus on specific parts of the input data, enabling them to weigh the importance of different features dynamically [93,94]. Mathematically, attention can be described as follows: consider a neural network f with parameters θ, and let x represent the input data. The attention mechanism A(x) assigns weights to different parts of the input, allowing the network to focus on the most relevant information. It is parameterized by learnable weights W_x, b_x, W_a, and b_a, with a softmax function normalizing the resulting attention weights.
The attention-weighted input x̃ is then computed as the element-wise product of the
attention weights and the input:
x̃ = A(x) ⊙ x (17)
The attention-weighted input x̃ is then passed through the neural network f to obtain
the output y:
y = f(x̃; θ) (18)
Finally, the cost function J of the PINN, which incorporates the attention mechanism,
can be defined as follows:
J(θ) = (1/N) Σ_{i=1}^{N} L(y_i, ŷ_i) + λ R(θ) (19)
where N is the number of data points, L is the loss function, y_i is the ground-truth label, ŷ_i is the predicted output, and λ is the regularization parameter. The term R(θ) represents the regularization term, which penalizes large values of the network parameters θ to prevent overfitting.
In PINNs, attention mechanisms can enhance performance by selectively attend-
ing to relevant spatial or temporal features, particularly in problems with large or high-
dimensional input spaces; see [95–99].
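The sketch below illustrates Equations (17)–(19) in PyTorch with a simple feature-wise attention layer; the scoring layer and network sizes are assumptions, since the exact parameterization of A(x) varies between works.

```python
# Feature-wise attention: re-scale the input before it enters the network f.
import torch
import torch.nn as nn

d = 8
score = nn.Linear(d, d)                           # learnable scoring layer (W_a, b_a analogue)
f = nn.Sequential(nn.Linear(d, 16), nn.Tanh(), nn.Linear(16, 1))

x = torch.randn(32, d)                            # batch of inputs
attn = torch.softmax(score(x), dim=-1)            # attention weights A(x)
x_tilde = attn * x                                # Eq. (17): element-wise product
y_hat = f(x_tilde)                                # Eq. (18)

y_true = torch.randn(32, 1)
lam = 1e-4
J = nn.functional.mse_loss(y_hat, y_true) + lam * sum((p ** 2).sum() for p in f.parameters())  # Eq. (19)
```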
Generative models: Deep neural networks can be categorized into discriminative [100]
and generative models [101] based on their function. Discriminative models predict the
target variable given input variables, while generative models model the conditional
probability of observable variables given the target. Two prominent generative models are
variational autoencoders (VAEs) [102] and generative adversarial networks (GANs) [103].
Mathematically, a VAE can be represented as follows: Let X represent the input data, Z
represent the latent space, and θ represent the parameters of the VAE. The encoder qθ ( Z | X )
maps input data to the latent space distribution, and the decoder pθ ( X | Z ) reconstructs the
input data from the latent space. The objective is to maximize the evidence lower bound (ELBO), given by
ELBO(θ) = E_{q_θ(Z|X)} [log p_θ(X|Z)] − KL(q_θ(Z|X) ∥ p(Z))
where p(Z) is the prior distribution over the latent space and KL denotes the Kullback–Leibler divergence.
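A compact PyTorch sketch of this VAE objective is shown below; it uses a Gaussian encoder with the reparameterization trick and a standard-normal prior, with layer sizes chosen only for illustration.

```python
# Negative ELBO: reconstruction term plus KL divergence to a standard-normal prior.
import torch
import torch.nn as nn

d_x, d_z = 16, 2
enc = nn.Linear(d_x, 2 * d_z)                     # outputs mean and log-variance of q_theta(Z|X)
dec = nn.Linear(d_z, d_x)                         # decoder p_theta(X|Z)

x = torch.randn(64, d_x)
mu, logvar = enc(x).chunk(2, dim=-1)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick

recon = nn.functional.mse_loss(dec(z), x, reduction="sum") / x.shape[0]
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
neg_elbo = recon + kl                              # minimizing this maximizes the ELBO
```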
A GAN can be represented as follows: a generator network G produces samples intended to mimic the data distribution p_data(X), while a discriminator network D distinguishes between real and generated samples. The generator is trained to minimize a cost function that rewards it for fooling the discriminator.
convergence rates [115]. Methods incorporating physical equations into deep learning
model loss functions, such as those introduced by [41], enable training without labeled
data, providing accurate predictions while respecting problem constraints and quantifying
predictive uncertainty across diverse scenarios.
Hybrid Approaches combine elements of multiple techniques [6,34], such as physics-
based loss functions and constraint-based methods, to simultaneously enforce physical con-
straints and data-driven objectives. Leveraging the complementary strengths of different
techniques, hybrid approaches offer a versatile and effective framework for incorporating
physical laws into PINNs.
Furthermore, advancements in optimization techniques, including physics-informed
stochastic gradient descent and adaptive learning rates, have contributed to the robust-
ness and efficiency of PINNs. In this context, Yang et al. [116] introduced the Bayesian
physics-informed neural network (B-PINN), which integrates Bayesian neural networks
(BNNs) [117] and PINNs to tackle both forward and inverse nonlinear problems involving
PDEs and noisy data. B-PINNs leverage physical principles and noisy measurements within
a Bayesian framework to provide predictions and evaluate uncertainty. Unlike PINNs, B-
PINNs offer more accurate predictions and are better equipped to manage significant levels
of noise by addressing overfitting. A systematic comparison between Hamiltonian Monte
Carlo (HMC) [118] and variational inference (VI) [119] indicates a preference for HMC for
posterior estimation. Furthermore, the substitution of the BNN in the prior with a truncated
Karhunen–Loève (KL) expansion combined with HMC or a deep normalizing flow (DNF)
model [120] shows potential but lacks scalability for high-dimensional problems.
4. Utility of PINNs
Differential equations are essential for understanding the dynamic behaviors of phys-
ical systems, as they describe how variables change relative to each other and enable
predictions about future behavior. However, real-world systems often face challenges due
to partially known differential equations:
1. Unknown Parameters: For example, in wind engineering [121], the equations of fluid
dynamics are established, but coefficients related to turbulent flow are uncertain.
2. Unknown Functional Forms: In chemical engineering [122,123], the exact form of rate
equations can be unclear due to uncertainties in reaction pathways.
3. Unknown Forms and Parameters: In battery state modeling [123,124], equivalent
circuit models partially capture the current–voltage relationship, and key parameters
like resistance and capacitance are unknown.
This partial knowledge hinders our understanding and control of these systems,
making it essential to infer unknown components from observed data, a process known as
system identification. This helps in predicting system states, informing control strategies,
and enabling theoretical analysis.
Recently, Zhang et al. [125] proposed a strategy using PINN and symbolic regression
to discover an unknown mathematical model in a system of ODEs. Though focused on
Alzheimer’s disease modeling, their approach shows potential for general equation discovery.
Various approaches exist for solving differential equations using PINNs, as described below. The overall process is illustrated in Figure 2, where the residuals of the differential
equations are integrated into the loss function to enhance the accuracy of the neural network
model. Here, ϵ denotes the acceptable margin of loss.
Figure 2. The process depicted involves integrating the residuals of the differential equations (DEs)
with dynamics N () into the loss function to improve the accuracy of the neural network (NN)
model. Here, ϵ represents the acceptable margin of loss. In this context, the primary role of the
neural network is to identify the optimal parameters that minimize the specified loss function while
adhering to the initial and boundary conditions. Meanwhile, the supplementary cost function ensures
that the constraints of the ODEs/PDEs are met, representing the physical information aspect of the
neural network.
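To make the workflow of Figure 2 concrete, the following self-contained PyTorch sketch trains a PINN for the toy ODE du/dt = −u with u(0) = 1; the network architecture, collocation grid, and number of training steps are illustrative assumptions and not taken from the works surveyed.

```python
# Minimal PINN: the ODE residual is obtained with automatic differentiation and
# added to the loss together with the initial-condition term (cf. Figure 2).
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_colloc = torch.linspace(0.0, 2.0, 100).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)

for step in range(2000):
    u = net(t_colloc)
    du_dt = torch.autograd.grad(u, t_colloc, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    residual = du_dt + u                               # DE residual: du/dt + u = 0
    loss_de = torch.mean(residual ** 2)
    loss_ic = torch.mean((net(t0) - 1.0) ** 2)         # initial condition u(0) = 1
    loss = loss_de + loss_ic
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(net(torch.tensor([[1.0]]))))               # should be close to exp(-1) ≈ 0.368
```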
Regularization techniques add a penalty term to the objective function to control the
complexity of the solution and prevent overfitting. This penalty term typically depends on
some measure of the complexity of the parameter vector x, denoted as Ω(x). A common
form of regularization is Tikhonov regularization (also known as ridge regression), where the data-misfit objective is augmented with a penalty of the form λ||Ω(x)||². Here, λ is a regularization parameter that controls the trade-off between fitting the data and the complexity of the solution. The term ||Ω(x)||² represents a measure of the
norm of the parameter vector x, which could be the L2-norm or another norm depending
on the specific regularization technique used.
In the context of PINNs, the regularization term Ω(x) could incorporate prior knowl-
edge about the physical principles governing the problem.
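A small NumPy/SciPy sketch of Tikhonov regularization is given below; the linear forward operator, the synthetic data, and the value of λ are assumptions made only so the example runs.

```python
# Least-squares data misfit augmented with a Tikhonov penalty lambda * ||x||^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))                  # forward model (linear, for simplicity)
x_true = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
y_obs = A @ x_true + 0.05 * rng.standard_normal(20)
lam = 0.1

def objective(x):
    return np.sum((y_obs - A @ x) ** 2) + lam * np.sum(x ** 2)   # misfit + Tikhonov penalty

x_est = minimize(objective, x0=np.zeros(5)).x
print(x_est)
```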
Bayesian inference: Such methods provide a probabilistic framework for solving
inverse problems by estimating the posterior distribution of input parameters based on
observed data [116]. This posterior distribution, p(θ | D ), can be approximated using PINNs,
which train a neural network to map input parameters to the posterior distribution [117].
Mathematically, we express this as follows:
p(θ|D) = p(D|θ) · p(θ) / p(D) (24)
where p(θ | D ) is the posterior distribution of the parameters given the data, p( D |θ ) is the
likelihood function, representing the probability of observing the data given the parameters,
and p(θ ) is the prior distribution, representing our initial belief about the parameters before
observing any data.
The network’s output, p̂(θ | D ), approximates the true posterior distribution. By incor-
porating physics-based priors, which encode known physical laws or constraints, PINNs
enhance the accuracy and reliability of parameter estimates, even with limited and noisy
data. This approach leverages domain knowledge during the inference process, producing
more accurate results.
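The following NumPy sketch evaluates Equation (24) on a grid for a single scalar parameter with a Gaussian likelihood and a standard-normal prior; the observations and noise level are assumed for illustration.

```python
# Unnormalized posterior on a grid: log p(theta|D) = log p(D|theta) + log p(theta) + const.
import numpy as np

theta_grid = np.linspace(-3, 3, 601)
data = np.array([0.9, 1.1, 1.3])                   # noisy observations of theta
sigma = 0.5

log_prior = -0.5 * theta_grid ** 2                 # standard-normal prior p(theta)
log_like = np.array([-0.5 * np.sum((data - th) ** 2) / sigma ** 2 for th in theta_grid])
log_post = log_prior + log_like                    # log p(theta|D) up to a constant

posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum() * (theta_grid[1] - theta_grid[0])   # normalize (divide by p(D))
print(theta_grid[np.argmax(posterior)])            # posterior mode
```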
Data assimilation: By employing data assimilation techniques, observational data can
be combined with numerical models to produce optimal estimates of the state variables and
parameters of a dynamical system [112]. PINNs can be used to assimilate observational data
into physics-based models, effectively constraining the model predictions to be consistent
with the observed data.
A basic mathematical representation for using data assimilation in inverse problems is
formulated as follows: let yobs denote the observed data, x represent the unknown state
or parameters to be estimated, and y(x) denote the model predictions given the state or
parameters x.
In data assimilation, the goal is to find the optimal estimate xest that minimizes
the discrepancy between the observed data and the model predictions, often subject to
additional constraints or regularization terms. This can be formulated as an optimization
problem, typically solved using optimization techniques such as variational methods or
Bayesian inference.
A common approach is to define a cost function that measures the misfit between the
observed data and the model predictions, along with any additional regularization terms.
The optimal estimate x_est is then obtained by minimizing this cost function:
x_est = arg min_x J(x) (25)
where J(x) is the cost function, often defined as the sum of a data misfit term and a
regularization term:
J (x) = Jdata (x) + Jreg (x) (26)
The data misfit term, Jdata (x), quantifies the mismatch between the observed data and
the model predictions:
Jdata (x) = ∥yobs − y(x)∥2 (27)
The regularization term, Jreg (x), imposes additional constraints or penalties to ensure
the solution is stable or adheres to prior knowledge.
The specific choice of regularization function depends on the problem and the desired
properties of the solution.
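The sketch below assembles the cost of Equations (26) and (27) for a toy nonlinear forward model and minimizes it with SciPy, using a penalty toward a background (prior) state as the regularization term; the model, data, and weighting are illustrative assumptions.

```python
# Variational data assimilation: minimize data misfit plus a background penalty.
import numpy as np
from scipy.optimize import minimize

def y_model(x):                                    # nonlinear forward model y(x)
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 2])

y_obs = np.array([1.2, 0.8])                       # observed data
x_background = np.array([0.5, 0.5])                # prior (background) estimate of the state
alpha = 0.1

def J(x):
    J_data = np.sum((y_obs - y_model(x)) ** 2)     # Eq. (27): data misfit
    J_reg = alpha * np.sum((x - x_background) ** 2)
    return J_data + J_reg                          # Eq. (26): total cost

x_est = minimize(J, x0=x_background).x             # optimal estimate via arg-min of J
print(x_est)
```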
Multi-physics methods: In many inverse problems, the underlying physical processes
are governed by multiple interacting phenomena, making the problem inherently multi-
physics [7,12,43,129]. PINNs can couple multiple physics-based models to solve the inverse
problem in a unified framework. This involves incorporating multiple physics-based
models into a system of coupled PDEs. By leveraging the expressive power of neural
networks, PINNs can simultaneously estimate multiple input parameters or conditions
from observed data, even in complex and highly coupled systems.
The mathematical representation of multi-physics coupling using PINNs involves
incorporating multiple physics-based models into a unified framework. Let f i (ui , x ) denote
the governing equation for the i-th physical process, where ui represents the solution vari-
able associated with that process and x denotes the spatial coordinates. The multi-physics
coupling is achieved by simultaneously solving the following system of coupled PDEs:
N
∂u1
= f 1 (u1 , x ) + ∑ γ1j · g1j (u j , x )
∂t j =2
N
∂u2
= f 2 (u2 , x ) + ∑ γ2j · g2j (u j , x )
∂t j =1 (29)
..
.
N −1
∂u N
= f N (u N , x ) + ∑ γ Nj · g Nj (u j , x )
∂t j =1
AI 2024, 5 1546
where N is the number of coupled physics processes, γij represents the coupling coefficients,
and gij (u j , x ) describes the influence of the solution variable u j from the j-th physics process
on the i-th process. By leveraging the expressive power of neural networks, PINNs provide
a flexible and efficient framework for solving such multi-physics problems.
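The following PyTorch sketch illustrates the coupled-residual construction of Equation (29) for two interacting fields, each represented by its own network; the dynamics f_i, the couplings g_ij, and the coefficients γ_ij are toy assumptions chosen so the example runs.

```python
# Two coupled fields u1(t), u2(t): both residuals enter a single PINN loss.
import torch
import torch.nn as nn

net1 = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
net2 = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
params = list(net1.parameters()) + list(net2.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

t = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)
gamma12, gamma21 = 0.5, -0.3                       # coupling coefficients

for step in range(500):
    u1, u2 = net1(t), net2(t)
    du1 = torch.autograd.grad(u1, t, torch.ones_like(u1), create_graph=True)[0]
    du2 = torch.autograd.grad(u2, t, torch.ones_like(u2), create_graph=True)[0]
    r1 = du1 - (-u1 + gamma12 * u2)                # du1/dt = f_1(u1) + gamma_12 * g_12(u2)
    r2 = du2 - (-2.0 * u2 + gamma21 * u1)          # du2/dt = f_2(u2) + gamma_21 * g_21(u1)
    ic = (net1(torch.zeros(1, 1)) - 1.0) ** 2 + (net2(torch.zeros(1, 1)) - 0.5) ** 2
    loss = (r1 ** 2).mean() + (r2 ** 2).mean() + ic.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```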
Model Calibration and Uncertainty Quantification: Inverse problems often require
not only estimating the input parameters but also quantifying uncertainty in the estimated
solution. PINNs can be trained to produce not only point estimates of the input parameters
but also probability distributions that quantify uncertainty in the estimated solution. By in-
corporating physics-based priors and regularization constraints into the training process,
PINNs can produce more reliable estimates of the input parameters and a more accurate
quantification of uncertainty in the estimated solution.
Let y denote the observed data or measurements, m the parameters of the model to be
estimated, f(m) the forward model which maps parameters to predicted data, e the error or
noise in the observations, and u the uncertainty associated with the estimated parameters.
Then, in a mathematical form, the model calibration and uncertainty quantification
problem can be represented as follows:
1. Forward Model:
y = f(m) + e (30)
2. Parameter Estimation:
m̂ = arg min_m ||y − f(m)||² (31)
3. Uncertainty Quantification:
u = F (m) (32)
where || · || denotes a norm, such as the Euclidean norm, and F (·) represents a function that
quantifies uncertainty in the estimated parameters, often through techniques like Bayesian
inference or Monte Carlo methods.
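A compact SciPy sketch of Equations (30)–(32) is given below: the parameters of a toy forward model are estimated by least squares, and their uncertainty is quantified with a simple Monte Carlo loop over perturbed data; the quadratic model and the noise level are assumptions made for illustration.

```python
# Calibration (least squares) and uncertainty quantification (Monte Carlo re-fits).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def f(m, x):                                       # forward model y = f(m)
    return m[0] * x ** 2 + m[1] * x

x = np.linspace(0, 1, 30)
m_true = np.array([2.0, -1.0])
y = f(m_true, x) + 0.05 * rng.standard_normal(x.size)   # Eq. (30): y = f(m) + e

def residuals(m, y_data):
    return y_data - f(m, x)

m_hat = least_squares(residuals, x0=np.zeros(2), args=(y,)).x   # Eq. (31)

# Eq. (32): uncertainty from repeated fits to noise-perturbed data
samples = np.array([
    least_squares(residuals, x0=m_hat, args=(y + 0.05 * rng.standard_normal(y.size),)).x
    for _ in range(200)
])
print(m_hat, samples.std(axis=0))                  # point estimate and its spread
```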
5. Applications
PINNs have shown remarkable potential across a spectrum of scientific and engi-
neering disciplines. In this section, we highlight some applications from fluid dynamics,
material science, quantum mechanics, and other fields. Table 1 shows an overview where
the columns highlight different features. We would like to note that we also included PINNs
for more general equations (GEs) such as models for predicting energy states [132,133] or
drug responses [134,135].
Table 1. The application of PINN for tackling Forward Solution (FS) and Parameter Estimation (PE)
in various contexts involving Ordinary Differential Equations (ODEs), Partial Differential Equations
(PDEs), and Generalized Equations (GEs).
with tailored properties [139]. They are also instrumental in studying phase transitions and
microstructural evolution in materials, providing insights into processes like solidification
and grain growth [138].
5.5. Geophysics
In geophysics, PINNs assist in interpreting seismic data to infer subsurface properties,
aiding in oil and gas exploration and earthquake hazard assessment [143]. They are also
used to model groundwater flow and contaminant transport, which support the man-
agement of water resources and environmental remediation efforts [142].
5.6. Biomedical Engineering
In biomedical
engineering, PINNs are employed to simulate blood flow and cardiac mechanics, thereby
contributing to the understanding and treatment of cardiovascular diseases [147]. Addi-
tionally, they are used to model tumor growth and spread, offering tools for predicting
cancer progression and optimizing treatment strategies [148].
5.7. Oncology
The use of PINNs in cancer research has also shown significant promise. PINNs
integrate physical laws and domain knowledge into models, which is particularly useful
for understanding complex biological systems like cancer. They have been employed to
model tumor growth by solving PDEs, providing accurate predictions of tumor behavior
and enabling personalized treatment planning by estimating key parameters [145]. In drug
response optimization [134,135], PINNs predict the pharmacokinetics and pharmacody-
namics of anticancer drugs, helping to optimize dosing regimens and maximize therapeutic
efficacy. For tumor-immune system interactions [146], PINNs model the dynamics of
immune cell infiltration and the effects of immunotherapies, aiding in the design of new
treatment strategies.
accuracy. Imbalanced data skew model training, while data heterogeneity requires robust
preprocessing to ensure compatibility. Limited input ranges hinder extrapolation, and data
biases affect generalization, necessitating meticulous data collection. Data interpretability
remains challenging in high-dimensional spaces. Integrating PINNs with traditional NNs
can mitigate these issues by functioning as an unsupervised method, eliminating the need
for labeled data, and converting PDE problems into loss function optimization. The PINN
algorithm embeds the mathematical model in the network, incorporating a residual term
from the governing equation to constrain the solution space.
Overfitting, where the model fits noise in the training data rather than the underlying patterns, is typically mitigated through regularization, dropout, and data augmentation. Conversely, underfitting, where the model
is too simplistic to capture data patterns, requires more complex architectures and addi-
tional training data. Domain shift, where training and test data distributions differ, can
degrade performance and is mitigated through domain adaptation and transfer learning.
Out-of-distribution (OOD) detection is essential for identifying unreliable predictions,
especially in critical applications, and involves methods like uncertainty estimation and
anomaly detection. Transfer learning and domain adaptation enhance generalization by
leveraging knowledge from related domains or tasks. Addressing these challenges requires
algorithmic improvements, model regularization, data augmentation, and domain-specific
expertise, ultimately enhancing the reliability, performance, and safety of PINN models
across various applications.
7. Future Directions
As PINNs continue to evolve, several exciting directions for future research and
development emerge. In this section, we explore some of the potential future directions for
advancing the field of PINNs.
8. Conclusions
PINNs have emerged as a powerful framework for integrating physics-based modeling
with data-driven learning, enabling us to tackle complex scientific and engineering
problems across diverse domains. In this survey, we explore the principles, applications,
challenges, and future directions of PINNs, highlighting their potential to revolutionize
scientific discovery, engineering design, and industrial innovation. The applications of
PINNs span a wide range of domains, including fluid dynamics, material science, quantum
mechanics, medicine, and climate research, demonstrating their versatility and effectiveness
in addressing complex problems.
Looking ahead, the future of PINNs is promising, with exciting opportunities for inno-
vation and discovery across a wide range of scientific and engineering domains. However,
further work is needed to develop systematic methods for optimizing network architectures
for specific problems. Additionally, creating a taxonomy of approaches to different dynamic
systems would facilitate easier application by others. Finally, extending PINNs into an
online learning paradigm would enable their applicability in digital twin research.
Author Contributions: Conceptualization, A.F. and F.E.-S.; methodology, A.F. and F.E.-S.; writing—
original draft preparation, A.F.; writing—review and editing, A.F., O.Y.-H., and F.E.-S. All authors
have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Gurney, K. An Introduction to Neural Networks; CRC Press: Boca Raton, FL, USA, 2018.
2. Crick, F. The recent excitement about neural networks. Nature 1989, 337, 129–132. [CrossRef]
3. Emmert-Streib, F. A heterosynaptic learning rule for neural networks. Int. J. Mod. Phys. C 2006, 17, 1501–1520. [CrossRef]
4. Bishop, C.M. Neural networks and their applications. Rev. Sci. Instruments 1994, 65, 1803–1832. [CrossRef]
5. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network
applications: A survey. Heliyon 2018, 4, e00938. [CrossRef]
6. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific machine learning through physics–informed
neural networks: Where we are and what’s next. J. Sci. Comput. 2022, 92, 88. [CrossRef]
7. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward
and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [CrossRef]
8. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys.
2021, 3, 422–440. [CrossRef]
9. Mowlavi, S.; Nabi, S. Optimal control of PDEs using physics-informed neural networks. J. Comput. Phys. 2023, 473, 111731.
[CrossRef]
10. Shin, Y.; Darbon, J.; Karniadakis, G.E. On the Convergence of Physics Informed Neural Networks for Linear Second-Order
Elliptic and Parabolic Type PDEs. Commun. Comput. Phys. 2020, 28. [CrossRef]
11. Meng, X.; Li, Z.; Zhang, D.; Karniadakis, G.E. PPINN: Parareal physics-informed neural network for time-dependent PDEs.
Comput. Methods Appl. Mech. Eng. 2020, 370, 113250. [CrossRef]
12. Cai, S.; Mao, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for fluid mechanics: A review.
Acta Mech. Sin. 2021, 37, 1727–1738. [CrossRef]
13. Chen, Y.; Lu, L.; Karniadakis, G.E.; Dal Negro, L. Physics-informed neural networks for inverse problems in nano-optics and
metamaterials. Opt. Express 2020, 28, 11618–11633. [CrossRef] [PubMed]
14. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [CrossRef]
15. Mohri, M.; Rostamizadeh, A.; Talwalkar, A. Foundations of Machine Learning; MIT Press: Cambridge, MA, USA, 2018.
16. Yazdani, S.; Tahani, M. Data-driven discovery of turbulent flow equations using physics-informed neural networks. Phys. Fluids
2024, 36, 035107. [CrossRef]
17. Camporeale, E.; Wilkie, G.J.; Drozdov, A.Y.; Bortnik, J. Data-driven discovery of Fokker-Planck equation for the Earth’s radiation
belts electrons using Physics-Informed neural networks. J. Geophys. Res. Space Phys. 2022, 127, e2022JA030377. [CrossRef]
18. Chen, Z.; Liu, Y.; Sun, H. Physics-informed learning of governing equations from scarce data. Nat. Commun. 2021, 12, 6136.
[CrossRef]
19. Emmert-Streib, F.; Moutari, S.; Dehmer, M. Elements of Data Science, Machine Learning, and Artificial Intelligence Using R; Springer:
Berlin/Heidelberg, Germany.
20. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf.
Process. Syst. 2012, 25. [CrossRef]
21. Sallam, A.A.; Amery, H.A.; Saeed, A.Y. Iris recognition system using deep learning techniques. Int. J. Biom. 2023, 15, 705–725.
[CrossRef]
22. Sallam, A.A.; Mohammed, B.A.; Abdulbari, M. A Dorsal Hand Vein Recognition System based on Various Machine and Deep
Learning Classification Techniques. In Proceedings of the IEEE 2023 3rd International Conference on Computing and Information
Technology (ICCIT), Tabuk, Saudi Arabia, 13–14 September 2023; pp. 493–501.
23. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.r.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; et al. Deep
neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag.
2012, 29, 82–97. [CrossRef]
24. Farea, A.; Tripathi, S.; Glazko, G.; Emmert-Streib, F. Investigating the optimal number of topics by advanced text-mining
techniques: Sustainable energy research. Eng. Appl. Artif. Intell. 2024, 136, 108877. [CrossRef]
25. Farea, A.; Emmert-Streib, F. Experimental Design of Extractive Question-Answering Systems:: Influence of Error Scores and
Answer Length. J. Artif. Intell. Res. 2024, 80, 87–125. [CrossRef]
26. Sharma, T.; Farea, A.; Perera, N.; Emmert-Streib, F. Exploring COVID-related relationship extraction: Contrasting data sources
and analyzing misinformation. Heliyon 2024, 10, e26973. [CrossRef] [PubMed]
27. Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.V.; Norouzi, M.; Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K.; et al. Google’s
neural machine translation system: Bridging the gap between human and machine translation. arXiv 2016, arXiv:1609.08144.
28. Brunton, S.L.; Noack, B.R.; Koumoutsakos, P. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 2020, 52, 477–508.
[CrossRef]
29. Libbrecht, M.W.; Noble, W.S. Machine learning applications in genetics and genomics. Nat. Rev. Genet. 2015, 16, 321–332.
[CrossRef]
30. Lake, B.M.; Salakhutdinov, R.; Tenenbaum, J.B. Human-level concept learning through probabilistic program induction. Science
2015, 350, 1332–1338. [CrossRef]
31. Alipanahi, B.; Delong, A.; Weirauch, M.T.; Frey, B.J. Predicting the sequence specificities of DNA-and RNA-binding proteins by
deep learning. Nat. Biotechnol. 2015, 33, 831–838. [CrossRef] [PubMed]
32. Rafiei, M.H.; Adeli, H. A novel machine learning-based algorithm to detect damage in high-rise building structures. Struct. Des.
Tall Spec. Build. 2017, 26, e1400. [CrossRef]
33. Kim, S.W.; Kim, I.; Lee, J.; Lee, S. Knowledge Integration into deep learning in dynamical systems: An overview and taxonomy.
J. Mech. Sci. Technol. 2021, 35, 1331–1342. [CrossRef]
34. Lawal, Z.K.; Yassin, H.; Lai, D.T.C.; Che Idris, A. Physics-informed neural network (PINN) evolution and beyond: A systematic
literature review and bibliometric analysis. Big Data Cogn. Comput. 2022, 6, 140. [CrossRef]
35. Sharma, P.; Chung, W.T.; Akoush, B.; Ihme, M. A review of physics-informed machine learning in fluid mechanics. Energies 2023,
16, 2343. [CrossRef]
36. Kashinath, K.; Mustafa, M.; Albert, A.; Wu, J.; Jiang, C.; Esmaeilzadeh, S.; Azizzadenesheli, K.; Wang, R.; Chattopadhyay, A.;
Singh, A.; et al. Physics-informed machine learning: Case studies for weather and climate modelling. Philos. Trans. R. Soc. A
2021, 379, 20200093. [CrossRef] [PubMed]
37. Latrach, A.; Malki, M.L.; Morales, M.; Mehana, M.; Rabiei, M. A critical review of physics-informed machine learning applications
in subsurface energy systems. Geoenergy Sci. Eng. 2024, 239, 212938. [CrossRef]
38. Huang, B.; Wang, J. Applications of physics-informed neural networks in power systems-a review. IEEE Trans. Power Syst. 2022,
38, 572–588. [CrossRef]
39. Pateras, J.; Rana, P.; Ghosh, P. A taxonomic survey of physics-informed machine learning. Appl. Sci. 2023, 13, 6892. [CrossRef]
40. Yang, Y.; Perdikaris, P. Adversarial uncertainty quantification in physics-informed neural networks. J. Comput. Phys. 2019, 394,
136–152. [CrossRef]
41. Zhu, Y.; Zabaras, N.; Koutsourelakis, P.S.; Perdikaris, P. Physics-constrained deep learning for high-dimensional surrogate
modeling and uncertainty quantification without labeled data. J. Comput. Phys. 2019, 394, 56–81. [CrossRef]
42. Sun, L.; Gao, H.; Pan, S.; Wang, J.X. Surrogate modeling for fluid flows based on physics-constrained deep learning without
simulation data. Comput. Methods Appl. Mech. Eng. 2020, 361, 112732. [CrossRef]
43. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conserva-
tion laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028. [CrossRef]
44. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Inferring solutions of differential equations using noisy multi-fidelity data. J. Comput.
Phys. 2017, 335, 736–746. [CrossRef]
45. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Machine learning of linear differential equations using Gaussian processes. J. Comput.
Phys. 2017, 348, 683–693. [CrossRef]
46. Raissi, M.; Karniadakis, G.E. Hidden physics models: Machine learning of nonlinear partial differential equations. J. Comput.
Phys. 2018, 357, 125–141. [CrossRef]
47. Di Lorenzo, D.; Champaney, V.; Marzin, J.; Farhat, C.; Chinesta, F. Physics informed and data-based augmented learning in
structural health diagnosis. Comput. Methods Appl. Mech. Eng. 2023, 414, 116186. [CrossRef]
48. Emmert-Streib, F.; Yang, Z.; Feng, H.; Tripathi, S.; Dehmer, M. An introductory review of deep learning for prediction models
with big data. Front. Artif. Intell. 2020, 3, 4. [CrossRef]
49. Agatonovic-Kustrin, S.; Beresford, R. Basic concepts of artificial neural network (ANN) modeling and its application in
pharmaceutical research. J. Pharm. Biomed. Anal. 2000, 22, 717–727. [CrossRef] [PubMed]
50. Dongare, A.; Kharde, R.; Kachare, A.D. Introduction to artificial neural network. Int. J. Eng. Innov. Technol. (IJEIT) 2012, 2,
189–194.
51. Zou, J.; Han, Y.; So, S.S. Overview of artificial neural networks. In Artificial Neural Networks: Methods and Applications; Springer:
Berlin/Heidelberg, Germany, 2009; pp. 14–22. [CrossRef]
52. Mehrotra, K.; Mohan, C.K.; Ranka, S. Elements of Artificial Neural Networks; MIT Press: Cambridge, MA, USA, 1997.
53. Daw, A.; Karpatne, A.; Watkins, W.D.; Read, J.S.; Kumar, V. Physics-guided neural networks (pgnn): An application in lake
temperature modeling. In Knowledge Guided Machine Learning; Chapman and Hall/CRC: Boca Raton, FL, USA, 2022; pp. 353–372.
54. Zhang, R.; Tao, H.; Wu, L.; Guan, Y. Transfer learning with neural networks for bearing fault diagnosis in changing working
conditions. IEEE Access 2017, 5, 14347–14357. [CrossRef]
55. Pfrommer, J.; Zimmerling, C.; Liu, J.; Kärger, L.; Henning, F.; Beyerer, J. Optimisation of manufacturing process parameters using
deep neural networks as surrogate models. Procedia CiRP 2018, 72, 426–431. [CrossRef]
56. Chao, M.A.; Kulkarni, C.; Goebel, K.; Fink, O. Fusing physics-based and deep learning models for prognostics. Reliab. Eng. Syst.
Saf. 2022, 217, 107961. [CrossRef]
57. Wang, F.; Zhang, Q.J. Knowledge-based neural models for microwave design. IEEE Trans. Microw. Theory Tech. 1997, 45,
2333–2343. [CrossRef]
58. Yuan, F.G.; Zargar, S.A.; Chen, Q.; Wang, S. Machine learning for structural health monitoring: Challenges and opportunities.
Sens. Smart Struct. Technol. Civ. Mech. Aerosp. Syst. 2020 2020, 11379, 1137903.
59. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE
Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [CrossRef] [PubMed]
60. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in
convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [CrossRef]
61. Sallam, A.; Gaid, A.S.; Saif, W.Q.; Hana’a, A.; Abdulkareem, R.A.; Ahmed, K.J.; Saeed, A.Y.; Radman, A. Early detection of
glaucoma using transfer learning from pre-trained cnn models. In Proceedings of the IEEE 2021 International Conference of
Technology, Science and Administration (ICTSA), Taiz, Yemen, 22–24 March 2021; pp. 1–5.
62. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing.
ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [CrossRef]
63. Xiang, Q.; Wang, X.; Lai, J.; Lei, L.; Song, Y.; He, J.; Li, R. Quadruplet depth-wise separable fusion convolution neural network for
ballistic target recognition with limited samples. Expert Syst. Appl. 2024, 235, 121182. [CrossRef]
64. Sadoughi, M.; Hu, C. Physics-based convolutional neural network for fault diagnosis of rolling element bearings. IEEE Sens. J.
2019, 19, 4181–4192. [CrossRef]
65. De Bézenac, E.; Pajot, A.; Gallinari, P. Deep learning for physical processes: Incorporating prior scientific knowledge. J. Stat.
Mech. Theory Exp. 2019, 2019, 124009. [CrossRef]
66. Zhang, R.; Liu, Y.; Sun, H. Physics-guided convolutional neural network (PhyCNN) for data-driven seismic response modeling.
Eng. Struct. 2020, 215, 110704. [CrossRef]
67. Cohen, T.; Welling, M. Group equivariant convolutional networks. In Proceedings of the International Conference on Machine
Learning, PMLR, New York, NY, USA, 19–24 June 2016; pp. 2990–2999.
68. Dieleman, S.; De Fauw, J.; Kavukcuoglu, K. Exploiting cyclic symmetry in convolutional neural networks. In Proceedings of the
International Conference on Machine Learning, PMLR, New York, NY, USA, 19–24 June 2016; pp. 1889–1898.
69. Jia, X.; Willard, J.; Karpatne, A.; Read, J.; Zwart, J.; Steinbach, M.; Kumar, V. Physics guided RNNs for modeling dynamical
systems: A case study in simulating lake temperature profiles. In Proceedings of the 2019 SIAM International Conference on
Data Mining, SIAM, Calgary, AB, Canada, 2–4 May 2019; pp. 558–566.
70. Worrall, D.E.; Garbin, S.J.; Turmukhambetov, D.; Brostow, G.J. Harmonic networks: Deep translation and rotation equivariance.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017;
pp. 5028–5037.
71. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D
Nonlinear Phenom. 2020, 404, 132306. [CrossRef]
72. Sutskever, I.; Martens, J.; Hinton, G.E. Generating text with recurrent neural networks. In Proceedings of the 28th International
Conference on Machine Learning (ICML-11), Bellevue, WA, USA, 28 June–2 July 2011; pp. 1017–1024.
73. Medsker, L.R.; Jain, L. Recurrent neural networks. Des. Appl. 2001, 5, 2.
74. Pearlmutter, B.A. Learning state space trajectories in recurrent neural networks. In Proceedings of the IEEE International 1989 Joint
Conference on Neural Networks, Washington, DC, USA, 18–22 June 1989; pp. 365–372.
75. Mandic, D.P.; Chambers, J. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability; John Wiley &
Sons, Inc.: Hoboken, NJ, USA, 2001.
76. Das, S.; Tariq, A.; Santos, T.; Kantareddy, S.S.; Banerjee, I. Recurrent neural networks (RNNs): Architectures, training tricks,
and introduction to influential research. In Machine Learning for Brain Disorders; Springer: Berlin/Heidelberg, Germany, 2023;
pp. 117–138.
77. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput.
2019, 31, 1235–1270. [CrossRef] [PubMed]
78. Nascimento, R.G.; Viana, F.A. Fleet prognosis with physics-informed recurrent neural networks. arXiv 2019, arXiv:1901.05512.
79. Dourado, A.; Viana, F.A. Physics-informed neural networks for corrosion-fatigue prognosis. In Proceedings of the Annual
Conference of the PHM Society, Paris, France, 2–5 May 2019; Volume 11.
80. Dourado, A.D.; Viana, F. Physics-informed neural networks for bias compensation in corrosion-fatigue. In Proceedings of the
Aiaa Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020; p. 1149.
81. Long, Y.; She, X.; Mukhopadhyay, S. Hybridnet: Integrating model-based and data-driven learning to predict evolution of
dynamical systems. In Proceedings of the Conference on Robot Learning, PMLR, Zürich, Switzerland, 29–31 October 2018;
pp. 551–560.
82. Yu, Y.; Yao, H.; Liu, Y. Structural dynamics simulation using a novel physics-guided machine learning method. Eng. Appl. Artif.
Intell. 2020, 96, 103947. [CrossRef]
83. Lutter, M.; Ritter, C.; Peters, J. Deep lagrangian networks: Using physics as model prior for deep learning. arXiv 2019,
arXiv:1907.04490.
84. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A comprehensive survey on graph neural networks. IEEE Trans. Neural
Netw. Learn. Syst. 2020, 32, 4–24. [CrossRef] [PubMed]
85. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and
applications. AI Open 2020, 1, 57–81. [CrossRef]
86. Zhou, Y.; Zheng, H.; Huang, X.; Hao, S.; Li, D.; Zhao, J. Graph neural networks: Taxonomy, advances, and trends. ACM Trans.
Intell. Syst. Technol. (TIST) 2022, 13, 1–54. [CrossRef]
87. Veličković, P. Everything is connected: Graph neural networks. Curr. Opin. Struct. Biol. 2023, 79, 102538. [CrossRef]
88. Ortega, A.; Frossard, P.; Kovačević, J.; Moura, J.M.; Vandergheynst, P. Graph signal processing: Overview, challenges, and
applications. Proc. IEEE 2018, 106, 808–828. [CrossRef]
89. Seo, S.; Meng, C.; Liu, Y. Physics-aware difference graph networks for sparsely-observed dynamics. In Proceedings of the
International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
90. Seo, S.; Liu, Y. Differentiable physics-informed graph networks. arXiv 2019, arXiv:1902.02950.
91. Zhang, G.; He, H.; Katabi, D. Circuit-GNN: Graph neural networks for distributed circuit design. In Proceedings of the
International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 7364–7373.
92. Mojallal, A.; Lotfifard, S. Multi-physics graphical model-based fault detection and isolation in wind turbines. IEEE Trans. Smart
Grid 2017, 9, 5599–5612. [CrossRef]
93. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need.
Adv. Neural Inf. Process. Syst. 2017, 30. Available online: https://api.semanticscholar.org/CorpusID:13756489 (accessed on 18
August 2024).
94. Knyazev, B.; Taylor, G.W.; Amer, M. Understanding attention and generalization in graph neural networks. Adv. Neural Inf.
Process. Syst. 2019, 32.
95. McClenny, L.; Braga-Neto, U. Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv 2020,
arXiv:2009.04544.
96. Rodriguez-Torrado, R.; Ruiz, P.; Cueto-Felgueroso, L.; Green, M.C.; Friesen, T.; Matringe, S.; Togelius, J. Physics-informed
attention-based neural network for solving non-linear partial differential equations. arXiv 2021, arXiv:2105.07898.
97. Rodriguez-Torrado, R.; Ruiz, P.; Cueto-Felgueroso, L.; Green, M.C.; Friesen, T.; Matringe, S.; Togelius, J. Physics-informed
attention-based neural network for hyperbolic partial differential equations: Application to the Buckley–Leverett problem. Sci.
Rep. 2022, 12, 7557. [CrossRef]
98. Jeddi, A.B.; Shafieezadeh, A. A physics-informed graph attention-based approach for power flow analysis. In Proceedings of the
2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), Pasadena, CA, USA, 13–16 December
2021; pp. 1634–1640.
99. Altaheri, H.; Muhammad, G.; Alsulaiman, M. Physics-informed attention temporal convolutional network for EEG-based motor
imagery classification. IEEE Trans. Ind. Inform. 2022, 19, 2249–2258. [CrossRef]
100. Che, T.; Liu, X.; Li, S.; Ge, Y.; Zhang, R.; Xiong, C.; Bengio, Y. Deep verifier networks: Verification of deep discriminative models
with deep generative models. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021;
Volume 35, pp. 7002–7010.
101. Salakhutdinov, R. Learning deep generative models. Annu. Rev. Stat. Its Appl. 2015, 2, 361–385. [CrossRef]
102. Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114.
103. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial
nets. Adv. Neural Inf. Process. Syst. 2014, 27.
104. Warner, J.E.; Cuevas, J.; Bomarito, G.F.; Leser, P.E.; Leser, W.P. Inverse estimation of elastic modulus using physics-informed
generative adversarial networks. arXiv 2020, arXiv:2006.05791.
105. Yang, L.; Zhang, D.; Karniadakis, G.E. Physics-informed generative adversarial networks for stochastic differential equations.
SIAM J. Sci. Comput. 2020, 42, A292–A317. [CrossRef]
106. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of wasserstein gans. Adv. Neural Inf.
Process. Syst. 2017, 30.
107. Yang, Y.; Perdikaris, P. Physics-informed deep generative models. arXiv 2018, arXiv:1812.03511.
108. Hammoud, M.A.E.R.; Titi, E.S.; Hoteit, I.; Knio, O. CDAnet: A Physics-Informed Deep Neural Network for Downscaling Fluid
Flows. J. Adv. Model. Earth Syst. 2022, 14, e2022MS003051. [CrossRef]
109. Psaros, A.F.; Kawaguchi, K.; Karniadakis, G.E. Meta-learning PINN loss functions. J. Comput. Phys. 2022, 458, 111121. [CrossRef]
110. Soto, Á.M.; Cervantes, A.; Soler, M. Physics-informed neural networks for high-resolution weather reconstruction from sparse
weather stations. Open Res. Eur. 2024, 4, 99. [CrossRef]
111. Davini, D.; Samineni, B.; Thomas, B.; Tran, A.H.; Zhu, C.; Ha, K.; Dasika, G.; White, L. Using physics-informed regularization to
improve extrapolation capabilities of neural networks. In Proceedings of the Fourth Workshop on Machine Learning and the
Physical Sciences (NeurIPS 2021), Vancouver, BC, Canada, 13 December 2021.
112. He, Q.; Barajas-Solano, D.; Tartakovsky, G.; Tartakovsky, A.M. Physics-informed neural networks for multiphysics data
assimilation with application to subsurface transport. Adv. Water Resour. 2020, 141, 103610. [CrossRef]
113. Owhadi, H. Bayesian numerical homogenization. Multiscale Model. Simul. 2015, 13, 812–828. [CrossRef]
114. Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations.
Science 2020, 367, 1026–1030. [CrossRef]
115. Meng, X.; Karniadakis, G.E. A composite neural network that learns from multi-fidelity data: Application to function approxima-
tion and inverse PDE problems. J. Comput. Phys. 2020, 401, 109020. [CrossRef]
116. Yang, L.; Meng, X.; Karniadakis, G.E. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE
problems with noisy data. J. Comput. Phys. 2021, 425, 109913. [CrossRef]
117. Neal, R.M. Bayesian Learning for Neural Networks; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012;
Volume 118.
118. Neal, R.M. MCMC using Hamiltonian dynamics. Handb. Markov Chain Monte Carlo 2011, 2, 2.
119. Graves, A. Practical variational inference for neural networks. Adv. Neural Inf. Process. Syst. 2011, 24.
120. Rezende, D.; Mohamed, S. Variational inference with normalizing flows. In Proceedings of the International Conference on
Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 1530–1538.
121. Yucesan, Y.A.; Viana, F.A. A physics-informed neural network for wind turbine main bearing fatigue. Int. J. Progn. Health Manag.
2020, 11. [CrossRef]
122. Fedorov, A.; Perechodjuk, A.; Linke, D. Kinetics-constrained neural ordinary differential equations: Artificial neural network
models tailored for small data to boost kinetic model development. Chem. Eng. J. 2023, 477, 146869. [CrossRef]
123. Pang, H.; Wu, L.; Liu, J.; Liu, X.; Liu, K. Physics-informed neural network approach for heat generation rate estimation of
lithium-ion battery under various driving conditions. J. Energy Chem. 2023, 78, 1–12. [CrossRef]
124. Wang, Y.; Xiong, C.; Wang, Y.; Xu, P.; Ju, C.; Shi, J.; Yang, G.; Chu, J. Temperature state prediction for lithium-ion batteries based
on improved physics informed neural networks. J. Energy Storage 2023, 73, 108863. [CrossRef]
125. Zhang, Z.; Zou, Z.; Kuhl, E.; Karniadakis, G.E. Discovering a reaction–diffusion model for Alzheimer’s disease by combining
PINNs with symbolic regression. Comput. Methods Appl. Mech. Eng. 2024, 419, 116647. [CrossRef]
126. Xiang, Z.; Peng, W.; Zhou, W.; Yao, W. Hybrid finite difference with the physics-informed neural network for solving PDE in
complex geometries. arXiv 2022, arXiv:2202.07926.
127. Hou, J.; Li, Y.; Ying, S. Enhancing PINNs for solving PDEs via adaptive collocation point movement and adaptive loss weighting.
Nonlinear Dyn. 2023, 111, 15233–15261. [CrossRef]
128. Sun, J.; Liu, Y.; Wang, Y.; Yao, Z.; Zheng, X. BINN: A deep learning approach for computational mechanics problems based on
boundary integral equations. Comput. Methods Appl. Mech. Eng. 2023, 410, 116012. [CrossRef]
129. Gao, H.; Zahr, M.J.; Wang, J.X. Physics-informed graph neural Galerkin networks: A unified framework for solving PDE-governed
forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2022, 390, 114502. [CrossRef]
130. Chen, H.; Wu, R.; Grinspun, E.; Zheng, C.; Chen, P.Y. Implicit neural spatial representations for time-dependent PDEs. In
Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 5162–5177.
131. Lu, L.; Jin, P.; Karniadakis, G.E. Deeponet: Learning nonlinear operators for identifying differential equations based on the
universal approximation theorem of operators. arXiv 2019, arXiv:1910.03193.
132. Pun, G.P.; Batra, R.; Ramprasad, R.; Mishin, Y. Physically informed artificial neural networks for atomistic modeling of materials.
Nat. Commun. 2019, 10, 2339. [CrossRef] [PubMed]
133. Mishin, Y. Machine-learning interatomic potentials for materials science. Acta Mater. 2021, 214, 116980. [CrossRef]
134. Sagingalieva, A.; Kordzanganeh, M.; Kenbayev, N.; Kosichkina, D.; Tomashuk, T.; Melnikov, A. Hybrid quantum neural network
for drug response prediction. Cancers 2023, 15, 2705. [CrossRef]
135. Shin, J.; Piao, Y.; Bang, D.; Kim, S.; Jo, K. DRPreter: Interpretable anticancer drug response prediction using knowledge-guided
graph neural networks and transformer. Int. J. Mol. Sci. 2022, 23, 13919. [CrossRef]
136. Zhang, E.; Dao, M.; Karniadakis, G.E.; Suresh, S. Analyses of internal structures and defects in materials using physics-informed
neural networks. Sci. Adv. 2022, 8, eabk0644. [CrossRef]
137. Bolandi, H.; Sreekumar, G.; Li, X.; Lajnef, N.; Boddeti, V.N. Physics informed neural network for dynamic stress prediction. Appl.
Intell. 2023, 53, 26313–26328. [CrossRef]
138. Zhu, Q.; Liu, Z.; Yan, J. Machine learning for metal additive manufacturing: Predicting temperature and melt pool fluid dynamics
using physics-informed neural networks. Comput. Mech. 2021, 67, 619–635. [CrossRef]
139. Chaffart, D.; Yuan, Y.; Ricardez-Sandoval, L.A. Multiscale Physics-Informed Neural Network Framework to Capture Stochastic
Thin-Film Deposition. J. Phys. Chem. C 2024, 128, 3733–3750. [CrossRef]
140. Desai, S.; Mattheakis, M.; Joy, H.; Protopapas, P.; Roberts, S. One-shot transfer learning of physics-informed neural networks.
arXiv 2021, arXiv:2110.11286.
141. Jin, H.; Mattheakis, M.; Protopapas, P. Physics-informed neural networks for quantum eigenvalue problems. In Proceedings of
the IEEE 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–8.
142. Meray, A.; Wang, L.; Kurihana, T.; Mastilovic, I.; Praveen, S.; Xu, Z.; Memarzadeh, M.; Lavin, A.; Wainwright, H. Physics-informed
surrogate modeling for supporting climate resilience at groundwater contamination sites. Comput. Geosci. 2024, 183, 105508.
[CrossRef]
143. Waheed, U.B.; Alkhalifah, T.; Haghighat, E.; Song, C.; Virieux, J. PINNtomo: Seismic tomography using physics-informed neural
networks. arXiv 2021, arXiv:2104.01588.
144. Lakshminarayana, S.; Sthapit, S.; Maple, C. Application of physics-informed machine learning techniques for power grid
parameter estimation. Sustainability 2022, 14, 2051. [CrossRef]
145. Rodrigues, J.A. Using Physics-Informed Neural Networks (PINNs) for Tumor Cell Growth Modeling. Mathematics 2024, 12, 1195.
[CrossRef]
146. Raeisi, E.; Yavuz, M.; Khosravifarsani, M.; Fadaei, Y. Mathematical modeling of interactions between colon cancer and immune
system with a deep learning algorithm. Eur. Phys. J. Plus 2024, 139, 1–16. [CrossRef]
147. Arzani, A.; Wang, J.X.; Sacks, M.S.; Shadden, S.C. Machine learning for cardiovascular biomechanics modeling: Challenges and
beyond. Ann. Biomed. Eng. 2022, 50, 615–627. [CrossRef]
148. Perez-Raya, I.; Kandlikar, S.G. Thermal modeling of patient-specific breast cancer with physics-based artificial intelligence.
ASME J. Heat Mass Transf. 2023, 145, 031201. [CrossRef]
149. Semeraro, C.; Lezoche, M.; Panetto, H.; Dassisti, M. Digital twin paradigm: A systematic literature review. Comput. Ind. 2021,
130, 103469. [CrossRef]
150. Emmert-Streib, F. Defining a digital twin: A data science-based unification. Mach. Learn. Knowl. Extr. 2023, 5, 1036–1054.
[CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.