Vadyala_2021_A review of physics based machine learning in civil engineering

A Review of Physics-based Machine Learning in Civil Engineering

Shashank Reddy Vadyala1; Sai Nethra Betgeri1; Dr. John C. Matthews2; Dr. Elizabeth Matthews3

1. Department of Computational Analysis and Modeling, Louisiana Tech University, Ruston, Louisiana, United States.
2. Director, TTC, Louisiana Tech University, Ruston, Louisiana, United States.
3. Assistant Professor, Civil Engineering, Louisiana Tech University, Ruston, Louisiana,
United States.

Abstract:

The recent development of machine learning (ML) and deep learning (DL) has increased opportunities across many sectors. ML is a significant tool that can be applied across many disciplines, but its direct application to civil engineering problems can be challenging. ML models for civil engineering applications that are developed in the lab often fail in real-world tests. This is usually attributed to a mismatch between the data used to train and test the ML model and the data it encounters in the real world, a phenomenon known as data shift. A physics-based ML model, however, integrates data, partial differential equations (PDEs), and mathematical models to address the data-shift problem. Physics-based ML models are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear equations. Physics-based ML, which is taking center stage across many science disciplines, plays an important role in fluid dynamics and quantum mechanics and reduces demands on computational resources and data storage. This paper reviews the history of physics-based ML and its application in civil engineering.

Keywords: Physics-based machine learning, Machine Learning, Deep neural network, Civil
engineering
1. Introduction

ML and DL, e.g., deep neural networks (DNNs), are becoming increasingly prevalent in the scientific process, replacing traditional statistical methods and mechanistic models in various commercial applications and fields, including natural science, engineering, and social science. ML is also applied in civil engineering, where mechanistic models have traditionally dominated [1-4]. Despite its wide adoption, researchers and other end users often criticize ML methods as a "black box," meaning they are thought to take inputs and provide outputs but not yield physically interpretable information to the user [5]. As a result, some scientists have developed physics-based ML to address widespread concern about the opacity of black-box models [6-9].

Civil engineering ML models are created directly from data by an algorithm; even the researchers who design them cannot fully understand how variables are combined to make predictions. Even with a list of input variables, black-box predictive ML models can be such complex functions that no researcher can understand how the variables are connected to arrive at a final prediction. For example, ML models that fail to estimate structural damage are often tied to processes that are not entirely understood. Hence, such models have high data needs, difficulty providing physically consistent findings, and a lack of generalizability to out-of-sample scenarios [8]. ML and DL models are tested on large, curated data sets with well-defined, precisely labeled categories. DL does well on these problems because it assumes a largely stable world. But in the real world, these categories are constantly evolving, particularly in civil engineering. Only after extensive testing of ML responses to various stimuli can we discover the problem.

Physics-based numerical simulations have become indispensable in civil engineering applications, such as seismic risk mitigation, irrigation management, structural design and analysis, and structural health monitoring. Civil engineers and scientists may now utilize sophisticated models for real-world applications, with ultra-realistic simulations involving millions of degrees of freedom, thanks to the advancement of high-performance computers. However, in the civil
engineering sector, such simulations are too time-consuming to be incorporated fully into an
iterative design process. They are often restricted to the final validation and certification stages,
while most design processes rely on simpler models. Accelerating complex simulations is an
important problem to address since it would make it easier to apply numerical tools throughout the
design process. The development of numerical methods for rapid simulations would also enable
novel model applications such as improving construction productivity, which has yet to be fully
utilized due to model complexity. Uncertainty quantification is another critical example of analysis
that might be feasible if simulation costs were lowered substantially. Indeed, the physical system
environment, which is generally unknown, affects the values of interest monitored in numerical
simulations. In some situations, these uncertainties significantly impact simulation results,
necessitating estimating probability distributions for the quantities of interest to assure the
product's dependability. Neither an ML-only nor a scientific knowledge-only method can be
considered sufficient for complicated scientific and technical applications. Researchers are
beginning to investigate the continuum between mechanistic and ML models, synergizing
scientific knowledge and data.

There have been several reviews of ML in civil engineering. However, limited studies have been conducted on physics-based ML and on synthesizing a road map for guiding subsequent research to advance its proper use in civil engineering applications. Furthermore, few works focus on the fundamental physics-based ML models in civil engineering. This study investigates a more profound connection of ML methods with physics models. Even though the notion of combining scientific principles with ML models has only recently gained traction [8], a significant amount of research has already been done on the subject. Researchers focus on physics models, ML models, and application scenarios to solve their problems in civil engineering. This study aims to bring these exciting developments to the ML community and make it aware of the progress completed and of the gaps and opportunities for advancing research in this promising direction.

Basics of Neural Network and Physics-Based Machine Learning

Neural networks (NNs) are an ML approach for expressing an input-output relationship of the form shown in Equation (1):

Y ≈ Y_NN = W^T φ_h(B^T x̄) + η    (1)

where x̄ = [x; 1], y is the target (output) variable, x is the input variable, and Y_NN is the predicted output obtained from the NN. φ_h is the activation function applied to the input variable, B is the transition weight matrix, W is the output weight matrix, and η is an unknown error owing to measurement or modeling mistakes. Within the weight matrices, the bias terms are accounted for by augmenting the input variable x with a unit value in the present notation. In Equation (1), the target variable is a linear combination of basis functions parametrized by B. A neural network design with depth K layers is defined in Equation (2):

Y ≈ Y_NN = W^T φ_{K−1}(B^T_{K−1} φ_{K−2}(… φ_1(B^T_1 x̄)))    (2)

where φ_k and B_k are the element-wise nonlinear function and the weight matrix for the k-th layer, and W is the output weight matrix, as shown in Figure 1.

Figure 1 A simple neural network with two hidden layers
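As an illustrative sketch of Equations (1) and (2), a two-hidden-layer network of this form can be written in a few lines of NumPy. The layer sizes, the tanh activation, and the random weights below are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

# Sketch of Equation (2): Y_NN = W^T phi(B2^T phi(B1^T x_bar)).
# Layer sizes, tanh activation, and random weights are illustrative.
rng = np.random.default_rng(0)

def forward(x, B1, B2, W, phi=np.tanh):
    x_bar = np.append(x, 1.0)       # augment input with a unit value (bias)
    h1 = phi(B1.T @ x_bar)          # first hidden layer
    h2 = phi(B2.T @ h1)             # second hidden layer
    return W.T @ h2                 # linear output layer

n_in, n_h, n_out = 3, 8, 1
B1 = rng.standard_normal((n_in + 1, n_h))
B2 = rng.standard_normal((n_h, n_h))
W = rng.standard_normal((n_h, n_out))

y = forward(rng.standard_normal(n_in), B1, B2, W)
```

In a trained network, the weight matrices B1, B2, and W would be fitted to data rather than drawn at random.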

Physics-based ML can incorporate knowledge we already have, such as physics-based forward models and optimization algorithms. The mechanics of training a physics-based network are like training any NN, as shown in Figure 2. It relies on a dataset, an optimizer, and automatic differentiation [10] to compute gradients. The encoding of information x into measurements y is given by Equation (3).

Figure 2. Physics-based ML models are trained to minimize a cost function comprised of equation residual and data
over space and time.

y = ϖ(x) + ι    (3)

where ϖ describes the forward model that characterizes the formation of the measurements, and ι is random system noise. Image reconstruction from a set of measurements, i.e., decoding, can be structured as an inverse problem, shown in Equation (4).

x* = arg min_x ||ϖ(x) − y||² + ℸ(x)    (4)

where x is the sought information, ℬ(x; y) = ||ϖ(x) − y||² is the data-consistency penalty (commonly the ℓ2 distance between the measurements and the estimated measurements), and ℸ(·) is the signal prior (e.g., sparsity, total variation). This optimization problem often requires a nonlinear and iterative solver. With a nonlinear signal prior, proximal gradient descent can efficiently solve the optimization problem [11]. When numerous constraints are imposed on the image reconstruction and the forward model is linear, methods such as the Alternating Direction Method of Multipliers (ADMM) [12] and Half-Quadratic Splitting (HQS) [13] can be efficient solutions. The physics-based network (Figure 3) is created by unrolling N iterations of an optimization algorithm into network layers. The measurements and initialization are fed into the network, and the output is an estimate of the information after N iterations of the optimizer.
Figure 3. An unrolled physics-based network is made up of N unrolled decoder iterations. Each layer includes a data-consistency update (e.g., a gradient update) and a prior update (e.g., a proximal update). The network's inputs are the measurements y and the initialization x^(0); the network's output is the final estimate x^(N), which is fed into a training loss function.
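The unrolling idea can be sketched for a linear forward model with a sparsity prior. Each "layer" below is one proximal-gradient (ISTA) iteration, alternating the data-consistency and prior updates; in a trained physics-based network the step size and threshold would be learnable parameters. The forward model, problem sizes, and hyperparameters are illustrative assumptions:

```python
import numpy as np

# Unrolled network for y = A x + noise with an l1 (sparsity) prior:
# each layer applies a data-consistency (gradient) update followed by
# a prior (proximal) update. Sizes and hyperparameters are illustrative.
rng = np.random.default_rng(1)

def soft_threshold(z, lam):
    """Proximal operator of the l1 prior (the 'prior update')."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def unrolled_net(y, A, n_layers=1000, lam=0.05):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe gradient step size
    x = np.zeros(A.shape[1])                     # initialization x^(0)
    for _ in range(n_layers):
        x = x - step * A.T @ (A @ x - y)         # data-consistency update
        x = soft_threshold(x, step * lam)        # prior update
    return x                                     # final estimate x^(N)

A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]                    # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = unrolled_net(y, A)
```

Here the iterations are fixed; unrolling them as network layers and training the per-layer parameters against a dataset is what turns this solver into a physics-based network.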

2. Reduced-order models

Computational mechanics is a branch of research that needs significant processing power to provide accurate results [14]. It nearly always uses a geometric mesh, and the fineness of the mesh largely determines the time a simulation takes to converge [14]. As a result, models can grow to such proportions that methods for reducing their order must be developed. These methods aim to create a Reduced-Order Model (ROM) that can effectively replace its heavier counterpart for tasks such as design and optimization, as well as real-time prediction, all of which require the model to run many times, which is typically impossible given the available computational resources [15]. ROMs capture the behavior of the source models so that civil engineers can quickly study a system's dominant effects using minimal computational resources. ROMs have become popular in civil engineering because engineers face market demands for shorter design cycles that produce higher-quality products and structures. ROMs can be used in civil engineering to simplify various models, down from full 3D simulations of systems. As a result, civil engineers can use them to optimize structural designs and create more extensive structural simulations.

3. Proper Orthogonal Decomposition


The Proper Orthogonal Decomposition (POD) [16, 17] applies the singular value decomposition (SVD) to the solution of partial differential equations (PDEs). It is one of the most effective dimensionality reduction techniques for studying complicated spatiotemporal systems [18]. Despite its introduction a few decades ago, POD-based ROM is still state-of-the-art in model order reduction, especially when coupled with Galerkin projection [19]. POD extracts the dominating spatial subspaces from a dataset. Put another way, POD calculates the prevailing coherent modes in an infinite-dimensional space that best describe a system's spatial evolution. Thus, the SVD or eigenvalue decomposition of a snapshot matrix is strongly connected to POD-ROM. Figure 4 illustrates the evolution of ROM methods.

Figure 4. Evolution of the ROM methods

3.1. Reduced basis through Proper Orthogonal Decomposition


According to [19], a popular technique to create a ROM is to compress it into a smaller space, described by a Reduced Basis (RB) set. For the most part, RB techniques follow an offline-online paradigm, with the first stage being more computationally intensive and the second being quick enough to allow for real-time predictions. The idea is to collect data points from simulation, or any high-fidelity source, called snapshots, stored in an ensemble {u_i}, and to extract the information that has the broadest impact on the system's dynamics, the modes, via a reduction method in the offline stage. POD [16, 17] aims at finding basis functions φ^(k) in a Hilbert space ℋ possessing the structure of an inner product (·,·) that optimally represent the field u. Supposing that solutions, or high-fidelity measures of these solutions, are available as {u_i}, each belonging to the same space ℋ, the field can be approximated as in Equation (5):

u = Σ_k υ^(k) φ^(k)    (5)

The Hilbert space for scalar or complex-valued functions is ℋ = L²(Θ), where a space vector x in the domain Θ is considered, as well as the time variable t, with an inner product defined by (f, g) = ∫_Θ f_i g_i^T dx (T denotes the conjugate transpose). As a result of the summation, variable separation is possible, as shown in Equation (6):

u(x, t) = Σ_k υ^(k)(t) φ^(k)(x)    (6)

Considering the mean ⟨·⟩ as "an average over several separate experiments," e.g., for a function f with N_r realizations f_i, ⟨f⟩ = (1/N_r) Σ_i f_i, the absolute value |·|, and the 2-norm ||f|| = (f, f)^{1/2}, each normalized optimal basis φ is sought, as shown in Equation (7):

max_{φ ∈ ℋ}  ⟨|(u, φ)|²⟩ / ||φ||²    (7)

It may be demonstrated to be equivalent to solving the eigenvalue problem for ξ, obtained from a condition of the calculus of variations, shown in Equation (8):

∫_Θ u(x, t) u*(x′, t) φ(x′) dx′ = ξ φ(x)    (8)
The SVD technique converts these continuous expressions to a low-rank approximation in most cases [20]. This method is very comparable to the statistical methodology of Principal Component Analysis (PCA) [21], which was reviewed recently [22]. Moreover, this technique may produce a realistic approximation by truncating the sum in Equation (6) to a finite length L, as originally shown by Sirovich (1987) and represented in Equation (9):

u^POD(x, t) = Σ_{k=1}^{L} υ^(k)(t) φ^(k)(x)    (9)

Next, the online stage entails retrieving the expansion coefficients and projecting them back into the uncompressed, full-order space. Again, the distinction between intrusive and nonintrusive approaches becomes apparent here. The former employs strategies tailored to the problem's formulation, whereas the latter attempts to infer the mapping statistically by treating the snapshots as a dataset.
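The offline reduction step of Equation (9) can be sketched with the method of snapshots. The toy field below (two traveling sinusoids) and the truncation length are illustrative assumptions:

```python
import numpy as np

# POD by the method of snapshots: stack snapshots u_i as columns,
# take the SVD, and keep the leading L modes (Equation (9)). The toy
# field below (two traveling sinusoids) is an illustrative assumption.
x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.linspace(0.0, 1.0, 40)
U = np.array([np.sin(x - ti) + 0.1 * np.sin(5 * x + 3 * ti) for ti in t]).T

phi, s, vt = np.linalg.svd(U, full_matrices=False)   # phi holds the POD modes

L = 4                                                # truncation length
U_pod = phi[:, :L] @ np.diag(s[:L]) @ vt[:L, :]      # truncated expansion

energy = np.sum(s[:L] ** 2) / np.sum(s ** 2)         # captured energy fraction
err = np.linalg.norm(U - U_pod) / np.linalg.norm(U)  # reconstruction error
```

Because each traveling sinusoid separates into two standing modes, this toy snapshot matrix has rank four, and the L = 4 truncation recovers it almost exactly; the singular-value spectrum is what guides the choice of L in practice.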

3.2. Intrusive reduced-order methods using the Galerkin procedure.

The Galerkin procedure is the traditional way to handle the second half of the POD approach to reduced-order modeling, as described and modified in [23]. For example, consider a PDE defined by a nonlinear operator 𝒩, with the x and t subscripts indicating the associated derivatives, shown in Equation (10):

u_t = 𝒩_x u    (10)

The Galerkin technique is used to find each expansion coefficient υ^(k) of the L-truncated sum in Equation (9). A system of solvable equations is generated by reinjecting the estimated u^POD into Equation (10) and projecting onto the L POD modes φ, known as the Galerkin projection. For the p-th expansion coefficient, with ℛ the nonlinear residual, this gives Equation (11):

v_t^(p) = (φ^(p), 𝒩_x u^POD) ≈ ℛ^(p)(u^POD)    (11)
In [24], this POD-Galerkin method was applied to the Shallow Water equations for cases such as dam failure and flood forecasting. However, since ℛ is a generic nonlinear operator, as indicated in these and many other publications, it is unclear how to achieve any speedup in the online stage, i.e., solving Equation (11), unless certain approximations of ℛ are made. In addition, the reduced basis is parameter-dependent; for parameter-dependent issues requiring several simulations, as is the case with uncertainty quantification problems, the usage of many RBs may be necessary, and finding a way to combine these bases to find an accurate solution is a difficult task [25, 26].
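For a linear operator, the Galerkin projection above reduces to projecting the full-order matrix onto the POD basis. The sketch below does this for the 1D heat equation; the grid size, diffusivity, time step, and initial condition are illustrative assumptions:

```python
import numpy as np

# Intrusive POD-Galerkin ROM for the linear 1D heat equation u_t = nu*u_xx
# with homogeneous Dirichlet BCs. Offline: collect full-order snapshots and
# build a POD basis Phi. Online: evolve the reduced system v_t = Phi^T A Phi v.
n, nu, dt, steps = 198, 0.5, 1e-5, 500
x = np.linspace(0.0, 1.0, n + 2)[1:-1]               # interior grid points
dx = x[1] - x[0]
A = nu * (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
          + np.diag(np.ones(n - 1), 1)) / dx**2      # Dirichlet Laplacian

u = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)  # initial condition

snaps = []                                           # offline stage
for _ in range(steps):
    snaps.append(u.copy())
    u = u + dt * A @ u                               # full-order forward Euler
S = np.array(snaps).T

Phi = np.linalg.svd(S, full_matrices=False)[0][:, :3]  # 3 POD modes
A_r = Phi.T @ A @ Phi                                # Galerkin-projected operator

v = Phi.T @ S[:, 0]                                  # online stage: 3 DOFs
for _ in range(steps):
    v = v + dt * A_r @ v                             # reduced forward Euler
u_rom = Phi @ v                                      # lift back to full space

err = np.linalg.norm(u_rom - u) / np.linalg.norm(u)
```

The online stage integrates a 3x3 system instead of a 198x198 one; the difficulty discussed in the text arises precisely when A is replaced by a nonlinear operator, whose reduced evaluation no longer collapses to a small precomputed matrix.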

3.3. Nonintrusive reduced-order methods using Polynomial Chaos Expansion

A modeling method must be used to make sense of this snapshot collection and create a surrogate
model to retrieve the projection coefficients accurately. While traditional and easy approaches like
polynomial interpolation appear promising for this job, as pointed out in [27], they struggle to
produce useful results with few samples. A different take has been explored within the Polynomial
Chaos Expansion (PCE) realm, proposed in [28]. Using Hermite polynomials, and more precisely,
a set of multivariate orthonormal polynomials Φ, Wiener's Chaos theory allows for modeling the
outputs as a stochastic process. Considering the previous expansion coefficients 𝑣(𝑘) (𝑡) as a
stochastic process of the variable 𝑡, the PCE is shown in Equation (12).

v^(k)(t) = Σ_{α ∈ C_L} C_α^(k) Φ_α(t)    (12)

with 𝛼 identifying polynomials following the right criteria in a set 𝑪𝑳 [29]. However, stability
issues may arise, and a new approach using the B-Splines Bézier Elements based Method
(BSBEM) to address this has been developed in [26]. Unfortunately, while it has shown excellent
results, this approach can also suffer from the curse of dimensionality, a term coined half a century
ago [30], that still has significant repercussions nowadays, as shown in [31]. In basic terms, it
means that many well-intentioned techniques work effectively in narrow domains but have
unexpected and unworkable consequences when applied to larger settings.
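A one-dimensional instance of Equation (12) can be sketched with probabilists' Hermite polynomials: project a quantity of interest v(ξ), with ξ ~ N(0, 1), onto He_α using the orthogonality relation E[He_α²] = α!. The target function and truncation order below are illustrative assumptions:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

# 1D Hermite polynomial chaos expansion: compute the coefficients
# C_alpha = E[v He_alpha] / E[He_alpha^2] by Gauss quadrature with
# weight exp(-x^2/2), then evaluate the truncated expansion.
def v(xi):
    return np.exp(0.5 * xi)                  # quantity of interest (assumed)

order = 10
nodes, weights = hermegauss(40)              # quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to the N(0,1) density

coeffs = []
for a in range(order + 1):
    e = np.zeros(a + 1)
    e[a] = 1.0
    He_a = hermeval(nodes, e)                # He_alpha evaluated at the nodes
    coeffs.append(np.sum(weights * v(nodes) * He_a) / math.factorial(a))

xi_test = 0.7                                # evaluate the truncated expansion
pce = hermeval(xi_test, np.array(coeffs))
abs_err = abs(pce - v(xi_test))
```

The multivariate case of Equation (12) replaces He_α by tensor products of such polynomials indexed by the multi-index set C_L, which is where the curse of dimensionality mentioned above enters.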

3.4. Machine Learning ROMs


ML is aiding the design of ROMs for greater accuracy and cheaper processing costs in various ways. One way is to create an ML-based surrogate model of a full-order model [32], where the ML model may be thought of as a ROM. Two further approaches are to develop an ML model that replicates the dimensionality-reduction mapping from a full-order model to a reduced-order model [33], or to generate an ML-based surrogate model of an already built ROM using another dimensionality reduction technique [34]. ML and ROMs can also be linked by using the ML model to learn the residual between observational data and a ROM [35]. The integration of physics-based modeling and ML can significantly increase ROMs' capabilities, owing to their typically rapid forward execution speed and ability to use data to simulate high-dimensional phenomena. The evolution of ROM is summarized in Table 1.
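A minimal nonintrusive sketch of the first approach fits a regression model from a parameter to the leading POD coefficients; least-squares polynomial regression stands in here for the neural-network surrogates cited above. The parametrized field u(x; μ) is an illustrative assumption:

```python
import numpy as np

# Nonintrusive ML ROM: learn the map mu -> leading POD coefficients with
# least-squares polynomial regression (a stand-in for an NN surrogate).
x = np.linspace(0.0, 1.0, 100)
mus = np.linspace(0.5, 2.0, 30)              # training parameter values

def field(mu):
    return mu * np.sin(np.pi * x) + mu**2 * np.sin(2.0 * np.pi * x)

S = np.array([field(mu) for mu in mus]).T    # snapshot matrix
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :2]   # 2 POD modes
V = Phi.T @ S                                # reduced coefficients per snapshot

P = np.vander(mus, 4)                        # cubic polynomial features in mu
C, *_ = np.linalg.lstsq(P, V.T, rcond=None)  # one regressor per POD mode

mu_new = 1.234                               # unseen parameter value
v_pred = (np.vander([mu_new], 4) @ C).ravel()
u_pred = Phi @ v_pred                        # surrogate prediction, lifted

err = np.linalg.norm(u_pred - field(mu_new)) / np.linalg.norm(field(mu_new))
```

Once the regression is fitted offline, predictions for new parameter values require no PDE solve at all, which is the appeal of nonintrusive ML ROMs.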

Table 1 The evolution of ROM

Year Reference Key Contributions


1915 [36] Galerkin method for solving (initial) boundary value problems
1962 [37] Low dimensional modeling (with 7 modes, see also Ref97 for a revisit)
1963 [38] Low dimensional modeling (with 3 modes)
1967 [39] Proper orthogonal decomposition (POD)
1987 [40] Method of snapshots
1988 [41] First POD model: Dynamics of coherent structures and global eddy viscosity modeling
1994 [42] Linear modal eddy viscosity closure
1995 [43] Gappy POD
2000 [44] Galerkin ROM for optimal flow control problems
2001 [45] Numerical analysis of Galerkin ROM for parabolic problems
2002 [46] Balanced truncation with POD
2003 [47] Guidelines for modeling unresolved modes in POD-Galerkin models
2004 [48] Spectral viscosity closure for POD models
2004 [49] Empirical interpolation method (EIM)
2005 [50] Spectral decomposition of the Koopman operator
2007 [51] Reduced basis approximation.
2007 [52] ROM for four-dimensional variational data assimilation
2008 [53] Interpolation method based on the Grassmann manifold approach.
2008 [54] Missing point estimation
2009 [55] Spectral analysis of nonlinear flows
2010 [56] A purely nonintrusive perspective: Dynamic mode decomposition (DMD)
2010 [57] Discrete empirical interpolation method (DEIM)
2013 [58] The Gauss-Newton with approximated tensors (GNAT) method
2013 [59] Proof of global boundedness of nonlinear eddy viscosity closures
2014 [60] K-scaled eddy viscosity concept
2015 [61] Stabilization of POD Galerkin approximations
2015 [62] On bounded solutions of Galerkin models
2016 [63] Data-driven operator inference nonintrusive ROMs
2016 [64] Spectral POD
2018 [65] On the relationship between spectral POD, DMD, and resolvent analysis
2018 [66] Shifted/transported snapshot POD.
2018 [67] Feature-based manifold modeling
2019 [68] Multi-scale proper orthogonal decomposition
2021 [69] Cluster-based network models

4. Development of physics-based machine learning

Although NNs have been around for a long time, dating back to the perceptron model [70], they had to wait for the concepts of backpropagation and automatic differentiation, coined in [71] and [72], respectively, to have a computationally practical way of training their multilayer, less trivial counterparts. Other types of NNs, such as Recurrent Neural Networks (RNNs) [73] and Long Short-Term Memory (LSTM) [74] networks, became popular, allowing for advances in sequence data. While the universal approximation power of DNNs in the context of DL had been predicted for a long time [75], the community had to wait until the early 2010s to finally have both the computational power and practical tools to train these large networks; thanks to new developments such as [76], this rapidly led to advances in making sense of and building upon vast volumes of data. Physics rules are traditionally represented as well-defined PDEs with Boundary Conditions (BC) and Initial Conditions (IC) acting as constraints. For example, novel data-driven methodologies for PDE discovery were developed in [77], and it was anticipated that this new discipline of DL in dynamic systems, such as Computational Fluid Dynamics (CFD), would take off (2017) [78]. Its versatility enables various applications, such as missing CFD data recovery [79] or aerodynamic design optimization [29]. The high expense of a fine mesh was addressed by using an ML technique to analyze errors and correct quantities in a coarser setting [80]. A new numerical scheme, the Volume of Fluid-Machine Learning (VOF-ML) method, was presented in [81] for bi-material situations. In addition, earlier research reviewed existing ML algorithms applied to the environmental sciences, especially hydrology [82]. Nonetheless, it is typical in engineering to have only scant and noisy data at our disposal, but also intuition or expert knowledge about the underlying physics. This encouraged researchers to consider how to combine the data requirements of these approaches with system expertise, such as governing equations, first detailed in [83, 84], then extended to NNs in [84] with applications in computational fluid dynamics, as well as in vibrations [85]. A few of these approaches are explained in detail in the sections below.

4.1. Physics-Informed Machine Learning


Despite significant progress in simulating multiphysics problems using numerical discretization of PDEs, it is still impossible to seamlessly incorporate noisy data into existing algorithms, mesh generation remains difficult, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Furthermore, solving inverse problems involving hidden physics is frequently prohibitively costly and necessitates different formulations and complex computer codes. ML has emerged as a viable alternative; however, training DNNs requires large amounts of data, which are not always available for scientific problems. Instead, such networks may be trained with additional information obtained by enforcing physical laws. This type of physics-informed learning combines (noisy) data with mathematical models, implemented using NNs or other kernel-based regression networks. Furthermore, customized network designs that automatically satisfy certain physical invariants for increased accuracy, training speed, and generalization may be conceivable.
4.2. Encoding physics in Gaussian Processes
A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution. The mean and covariance functions of a GP define it entirely. The mean function m(x) and the covariance function k(x, x′) of a real process f(x) are defined by Equations (12) and (13), respectively:

m(x) = 𝔼[f(x)]    (12)

k(x, x′) = 𝔼[(f(x) − m(x))(f(x′) − m(x′))]    (13)

The Gaussian process is then written as Equation (14):

f(x) ~ GP(m(x), k(x, x′))    (14)

The Gaussian process is an extension of the Gaussian probability distribution to functions. GPs are powerful nonparametric function estimators. However, when the training data are insufficient to reflect the complexity of the system generating the data, or when the test points are far from the training instances (extrapolation), GPs might perform poorly as a data-driven method. On the other hand, physics information is stated as differential equations and is utilized to create physical models for various research and engineering applications [86]. These models are designed to represent the system's underlying mechanism (i.e., physical processes) and are not constrained by data availability: they can generate accurate predictions even without training data [87]. A GP prior has a mean (here 0) and a covariance function k, for instance, the squared exponential. It can be thought of as a very long vector containing every function value y_i = f(x_i); with f′ representing the test outputs at inputs x′, not yet observed, the prior is demonstrated by Equations (15) and (16):

f(x) ~ GP(0, k(x, x′; θ))    (15)


[ f  ]           [ k(x, x; θ)    k(x, x′; θ)  ]
[ f′ ]  ~ 𝒩( 0,  [ k(x′, x; θ)   k(x′, x′; θ) ] )    (16)
And the covariance can be, for instance, Gaussian, as shown in Equation (17):

k(x, x′; [α, β]) := α² exp( −(1/2) Σ_{d=1}^{n} (x_d − x_d′)² / β_d² )    (17)
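Conditioning the joint prior of Equation (16) on observed training data gives the familiar GP posterior. The sketch below uses the squared-exponential covariance of Equation (17); the hyperparameters, jitter, and toy data are illustrative assumptions:

```python
import numpy as np

# GP regression with the squared-exponential covariance of Equation (17).
# Hyperparameters alpha, beta, the jitter, and the data are illustrative.
def kernel(xa, xb, alpha=1.0, beta=0.5):
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return alpha**2 * np.exp(-0.5 * d2 / beta**2)

x_train = np.linspace(-2.0, 2.0, 15)
y_train = np.sin(2.0 * x_train)              # noise-free training data
x_test = np.array([0.3])

K = kernel(x_train, x_train) + 1e-6 * np.eye(x_train.size)  # jitter
K_s = kernel(x_test, x_train)

# posterior mean and variance conditioned on the training data
mean = K_s @ np.linalg.solve(K, y_train)
var = kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
```

In the physics-encoded schemes below, the same conditioning machinery is reused, but the covariance structure is derived from the discretized PDE rather than chosen freely.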

A) Example problem Setup for linear PDEs


Let us now consider time-dependent linear PDEs, as presented in [83]. Applying the simplest temporal discretization method, forward Euler, gives Equation (18):

u_t = ℒ_x u,  x ∈ Ω,  t ∈ [0, T]

u^n = u^{n−1} + Δt ℒ_x u^{n−1}    (18)

and then placing a GP prior shown in Equation (19).

u^{n−1}(x) ~ GP(0, k_{u,u}^{n−1,n−1}(x, x′; θ))    (19)

As a result, the Euler rule is captured in the following multi-output GP, Equation (20). Table 2 gives pseudo-code of the steps involved.

[ u^n     ]           [ k_{u,u}^{n,n}       k_{u,u}^{n,n−1}   ]
[ u^{n−1} ]  ~ GP( 0, [ k_{u,u}^{n−1,n}     k_{u,u}^{n−1,n−1} ] )    (20)

Table 2. Pseudo-code of the GP time-stepping scheme

1 Train hyperparameters θ with initial {x^0, u^0} and boundary {x_b^1, u_b^1} data.
2 Predict artificial data {x^1, u^1} for the next time-step from the posterior.
3 Train new hyperparameters for time-step 2, using these artificial data {x^1, u^1} and the boundary data {x_b^2, u_b^2}.
4 Predict new artificial data {x^2, u^2} with these new hyperparameters.
5 Repeat steps 3 and 4 until the final time-step.

B) Example problem setup for nonlinear PDEs

What if ℒ_x is nonlinear? For example, Burgers' equation is shown in Equation (21):

u_t + u u_x = ν u_xx  with  ℒ_x u := ν u_xx − u u_x    (21)


Applying backward Euler gives Equation (22):

u^n = u^{n−1} − Δt u^n (d/dx) u^n + ν Δt (d²/dx²) u^n    (22)

Assuming u^n is a GP will not work here, since the nonlinear term u^n (d/dx) u^n does not result in a GP. The idea is to utilize the preceding step's posterior mean μ^{n−1}, as shown in Equation (23):

u^n = u^{n−1} − Δt μ^{n−1} (d/dx) u^n + ν Δt (d²/dx²) u^n    (23)

The cubic scaling of computational cost with the number of training points, due to matrix inversion when forecasting, and the necessity to address nonlinear equations on a case-by-case basis are limitations of this technique. This has encouraged scientists to investigate DNNs with built-in nonlinearities.

4.3. Physics-Informed Neural Networks

Modeling physical processes described by PDEs has improved significantly thanks to PINNs. PINNs use simple architectures to learn the behavior of complicated physical systems by adjusting network parameters to minimize the residual of the underlying PDEs. As presented in [84], let us consider a generic, parametrized nonlinear PDE, shown in Equation (24):

u_t + 𝒩_x^γ u = 0,  x ∈ Ω,  t ∈ [0, T]    (24)

Whether we aim to solve the PDE or to identify the parameters γ, the idea of the paper is the same: approximating u(t, x) with a DNN, thereby defining the resulting PINN f(t, x), shown in Equation (25):

f := u_t + 𝒩_x^γ u    (25)
This network is derived using automatic differentiation, a chain-rule-based approach famously utilized in typical DL settings, eliminating the requirement for numerical or symbolic differentiation. Burgers' equation in 1D with Dirichlet IC/BC is used as a test case, as shown in Equations (26), (27), and (28):

u_t + u u_x − (0.01/π) u_xx = 0,  x ∈ [−1, 1],  t ∈ [0, 1]    (26)

u(0, x) = −sin(πx)    (27)

u(t, −1) = u(t, 1) = 0    (28)

From this we can define f(t, x), the PINN, shown in Equation (29):

f := u_t + u u_x − (0.01/π) u_xx    (29)

The shared parameters are learned by minimizing a custom version of the commonly used Mean Squared Error loss, with {t_u^i, x_u^i, u^i}_{i=1}^{N_u} the IC/BC data on u(t, x) and {t_f^i, x_f^i}_{i=1}^{N_f} the collocation points for f(t, x), shown in Equation (30):

MSE = (1/N_u) Σ_{i=1}^{N_u} |u(t_u^i, x_u^i) − u^i|² + (1/N_f) Σ_{i=1}^{N_f} |f(t_f^i, x_f^i)|²    (30)

Figure 5 shows an overview of PINNs, based on [84]. Table 3 offers pseudo-code.
Figure 5. Overview of the PINNs

Table 3. Implementing a PINN is straightforward with modern tools

1 Function u(t,x):
2     û = NN([x, t])
3     Return û
4
5 Function f(t,x):
6     û = u(t, x)
7     û_t = tf.gradients(û, t)
8     û_x = tf.gradients(û, x)
9     û_xx = tf.gradients(û_x, x)
10    f̂ = û_t + û û_x − (0.01/π) û_xx
11    Return f̂
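The residual and loss of Equations (29)-(30) can be sketched numerically. Here derivatives are taken by finite differences on a grid as a simple stand-in for the automatic differentiation a real PINN would use, and the candidate field u is an illustrative assumption, not a trained network:

```python
import numpy as np

# Numerical sketch of the PINN residual and loss of Equations (29)-(30)
# for Burgers' equation. Finite differences stand in for automatic
# differentiation; the candidate field u is an illustrative assumption.
nu = 0.01 / np.pi
x = np.linspace(-1.0, 1.0, 201)
t = np.linspace(0.0, 1.0, 101)
X, T = np.meshgrid(x, t, indexing="ij")

u = -np.sin(np.pi * X) * np.exp(-nu * np.pi**2 * T)  # candidate field

u_t = np.gradient(u, t, axis=1)                      # derivatives for Eq. (29)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)
f = u_t + u * u_x - nu * u_xx                        # PDE residual

mse_u = np.mean((u[:, 0] + np.sin(np.pi * x)) ** 2)      # IC: u(0,x) = -sin(pi x)
mse_u += np.mean(u[0, :] ** 2) + np.mean(u[-1, :] ** 2)  # BC: u(t, +-1) = 0
mse_f = np.mean(f**2)                                    # residual term
loss = mse_u + mse_f                                     # Equation (30)
```

This candidate field satisfies the IC/BC exactly but not the nonlinear PDE, so the loss is dominated by the residual term; training a PINN amounts to adjusting the network parameters until both terms are driven toward zero.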

The same authors have performed further work, applying the framework to different fields,
including DL of vortex-induced vibrations [85].

4.4. The advantages and disadvantages of physics-based ML


The ability of NNs to approximate solutions to PDEs has been a fascinating field of research. The prediction of dynamics over very long durations, surpassing the training horizon over which the network was tuned to represent the solution, remains a significant issue. Due to their desirable features, current ML methods, particularly DNNs, have seen significant success across computational science. First, a sequence of universal approximation theorems [94-96] shows that NNs can approximate any Borel measurable function on a compact set with arbitrary precision, given enough hidden neurons. Given enough samples and processing resources, this property allows NNs to approximate any well-defined function.

Furthermore, [88] and more recent studies [89, 90] estimate the convergence rate of the approximation error of an NN with respect to its depth and width, which allows NNs to be used in scenarios with high accuracy requirements. Secondly, the development of differentiable programming and automatic differentiation enables efficient and accurate calculation of the gradients of NN functions with respect to inputs and parameters. These backpropagation algorithms allow NNs to be efficiently optimized for specified objectives. These characteristics have sparked interest in using NNs to solve PDEs. Such methods fall into two general classes. The first focuses on directly learning the PDE operator [91, 92]. For example, in the Deep Operator Network (DeepONet), the input function can be the IC/BC and the parameters of the equation, mapped to the output: the PDE solution at the target spatio-temporal coordinates. In this approach, the NNs are trained using independent simulations that must span the space of interest. NN training is therefore predicated on many solutions that may be computationally expensive to obtain; still, once trained, the network evaluation is computationally efficient [93, 94]. The second class of methods adopts the NN as a basis function to represent a single solution. The inputs to the network are generally the spatio-temporal coordinates of the PDE, and the outputs are the solution values at those coordinates.
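The branch/trunk structure of the first class can be sketched as follows; the weights are random and untrained, and the layer sizes, sensor count, and latent width are arbitrary placeholders rather than a DeepONet from the literature:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random, untrained weights -- architecture sketch only.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                  # number of sensors, latent width
branch = mlp([m, 64, p])       # encodes the input function u(x_1..x_m)
trunk = mlp([2, 64, p])        # encodes a query coordinate (t, x)

u_sensors = rng.standard_normal((1, m))   # sampled IC/BC or parameters
y_query = rng.standard_normal((100, 2))   # spatio-temporal query points

# DeepONet-style output: inner product of branch and trunk features,
# i.e. the predicted solution at each query point.
G = forward(branch, u_sensors) @ forward(trunk, y_query).T   # shape (1, 100)
```

Training would fit both sub-networks so that `G` matches precomputed PDE solutions over many input functions.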

The NNs are trained by minimizing the PDE residuals and the mismatch in the IC/BC. Such an approach dates back to [95], where NNs were used to solve the Poisson equation. In later studies [96, 97], the BC was imposed exactly by multiplying the NN with certain polynomials. In [98], the PDEs are enforced by minimizing energy functionals instead of equation residuals, unlike most existing methods. In [84], PINNs for forward and inverse (data assimilation) problems of time-dependent PDEs are developed. PINNs use automatic differentiation to evaluate all the derivatives in the differential equations and the gradients in the optimization method. Gradients in PINNs are evaluated efficiently because automatic differentiation composes analytical derivatives of the activation functions through the chain rule. The time-dependent PDEs are enforced by minimizing the residuals at selected points in the whole spatio-temporal domain. The cost function has an additional penalty term on the IC/BC if the PDE problem is forward, and a penalty term on observations for inverse data assimilation problems. However, when the underlying PDE solutions contain high frequencies or multi-scale features, PINNs with fully connected architectures frequently fail to achieve stable training and provide correct predictions [99-101]. Recent work ascribed this pathological behavior to multi-scale interactions between various components of the PINN loss function, which eventually lead to stiffness in the gradient-flow dynamics, imposing severe stability requirements on the learning rate [102]. Table 4 gives a summary of requirements and possible advantages and disadvantages of physics-based ML.

Table 4. Summary of requirements and possible advantages and disadvantages of physics-based ML.

Physics-based ML | Requirements | Advantages | Disadvantages
Data | Quality data | Requires a small amount of data compared to non-physics-based ML | Hard to get quality data
Cost function | Establish physical relations using PDEs | Physical consistency, improved generalization and accuracy | Complex physics PDEs
Initialization | Synthetic data from physics models | Reduced observations required, improved accuracy | A fixed initial state makes exploration challenging
Run time | High-performance device | Very fast | -
Architecture | Based on the complexity of the task | Intermediate physical variables/processes, informed prior distributions, easy to implement using existing packages such as PINNs, DeepONet | -
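The cost-function row above (PDE residual plus IC/BC penalties) can be sketched numerically. To stay framework-free, this example differentiates a hypothetical candidate solution of the 1-D heat equation analytically instead of by autodiff; the candidate happens to be an exact solution, so every loss term evaluates to (numerically) zero:

```python
import numpy as np

# Composite physics-based cost for u_t = k * u_xx on (t, x) in [0,1]^2 with
# u(0, x) = sin(pi*x) and u(t, 0) = 0. Candidate: u = exp(-t) * sin(pi*x),
# which solves the problem for k = 1/pi^2 (illustrative values only).
k = 1.0 / np.pi**2

def u(t, x):    return np.exp(-t) * np.sin(np.pi * x)
def u_t(t, x):  return -np.exp(-t) * np.sin(np.pi * x)
def u_xx(t, x): return -np.pi**2 * np.exp(-t) * np.sin(np.pi * x)

rng = np.random.default_rng(1)
t_c, x_c = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)  # collocation pts
x_0 = rng.uniform(0, 1, 50)                                 # IC points

loss_pde = np.mean((u_t(t_c, x_c) - k * u_xx(t_c, x_c)) ** 2)
loss_ic = np.mean((u(np.zeros_like(x_0), x_0) - np.sin(np.pi * x_0)) ** 2)
loss_bc = np.mean(u(t_c, np.zeros_like(t_c)) ** 2)          # u(t, 0) = 0
loss = loss_pde + loss_ic + loss_bc   # weighted sum; weights omitted here
```

In an actual PINN, `u` would be the network, the derivatives would come from automatic differentiation, and the relative weighting of the three terms is exactly where the stability issues discussed above arise.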
5. Application of Physics-based ML to Civil Engineering

There have been applications of physics-based ML models within the field of civil engineering.
Figure 6 provides a bar chart showing the number of ML papers published from 2014 to early 2021, indicating overall growth. This growth seems to follow the rising interest in physics-based ML, which began in 2019 with the introduction of Physics-Informed Neural Networks (PINNs) [84].

Figure 6. Papers published from 2014 to early 2021

Sensor and signal data are mainly used when applying physics-based ML in civil engineering, while other data sources are employed based on specific requirements. Therefore, researchers must select and adapt the algorithms and network structures to solve different civil engineering problems. ML models can learn physics through their capacity to learn from experience: given enough instances of how a physical system behaves, an ML model can learn how it acts and generate accurate predictions. As a result, physics-based ML may be used in various engineering applications such as damage detection, vibration identification, 3D reconstruction, and anomaly detection. According to the data types noted in the collected literature, the three main applications of physics-based ML methods in civil engineering are
• 3D Building Information Modelling (BIM)
• Structural health monitoring system
• Structural design and analysis

5.1 Building Information Modeling (BIM)


BIM has been highlighted as a new and revolutionary technology for improving the building industry's performance. The BIM tools commonly used in civil engineering are Autodesk’s AutoCAD Civil 3D® and Revit Structure®. These tools optimize and validate projects before they are built and model how infrastructure operates in a 3D real-world setting. To complement 3D BIM, physics-based ML will be a useful tool for problem-solving in geotechnical engineering [103]; an example is shown in Figure 7. The steps carried out in the physics-based ML method for 3D BIM were: modeling the 3D digital terrain model from a point cloud; creating the horizontal alignment and vertical profiles and editing cross-sections; modeling the jacked tunnel; creating the roundabout; generating the 3D parametric model of the complete road; and visualizing the infrastructure in the real-world context governed by PDEs. Table 4 provides a systematic organization and taxonomy of the application-centric objectives and methods of existing physics-based ML for BIM applications.

Figure 7 Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from laser scans [104]
Table 4. Table of literature classified by existing physics-based ML for BIM applications.

Paper | Year | Application
Efficient intensity measures and machine learning classification algorithms for collapse prediction informed by physics-based ground motion simulations [105] | 2020 | CFD, BIM
Enhancing predictive skills in a physically consistent way: Physics Informed Machine Learning for Hydrological Processes [106] | 2021 | CFD, BIM
Physics-Informed Autoencoders for Lyapunov-stable Fluid Flow Prediction [107] | 2019 | CFD, Turbulence modeling, BIM
A Data-Driven and Physics-Based Approach to Exploring Interdependency of Interconnected Infrastructure [108] | 2019 | CFD, BIM
Physics Guided Machine Learning Methods for Hydrology [109] | 2020 | CFD, BIM
Modeling the dynamics of PDE systems with Physics-Constrained Deep Auto-Regressive Networks [110] | 2019 | CFD, BIM
A domain decomposition nonintrusive reduced order model for turbulent flows [111] | 2019 | CFD, BIM

5.2 Structural Health Monitoring

Structural Health Monitoring (SHM) in the civil engineering industry faces unique challenges. These challenges result in part from the dynamic work environments of construction. Owing to their better capacity to detect damage and defects in civil engineering structures, physics-based ML approaches to SHM have gained much attention in recent years. Physics-based ML methods establish a high-fidelity physical model of the structure, usually by finite element analysis, and then establish a comparison metric between the model and the measured data from the real structure; an example is shown in Figure 8. In most cases, a vibration-based model-updating approach has been chosen, where the vibration or modal data are adopted as the basis for the updating process [6].
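A toy version of this model-updating loop can be sketched as follows, with a hypothetical two-degree-of-freedom spring-mass model standing in for the finite element model and a grid search standing in for the optimizer (all values are illustrative, not from a real structure):

```python
import numpy as np

# Vibration-based model updating: tune a stiffness scale k so that the
# model's natural frequencies match the "measured" ones.
M = np.eye(2)                                      # mass matrix

def frequencies(k):
    K = k * np.array([[2.0, -1.0], [-1.0, 1.0]])   # stiffness matrix
    lam = np.linalg.eigvalsh(np.linalg.solve(M, K))
    return np.sqrt(lam)                            # natural freqs (rad/s)

f_measured = frequencies(4.0)      # stands in for EMA/OMA identification

# Grid search stands in for the optimizer in the updating loop.
candidates = np.linspace(1.0, 8.0, 141)
errors = [np.sum((frequencies(k) - f_measured) ** 2) for k in candidates]
k_best = candidates[int(np.argmin(errors))]        # recovers k = 4.0
```

In practice the comparison metric may also include mode shapes and frequency response functions, and the physical model is a full finite element model rather than a 2-DOF system.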
Figure 8. Vibration Analysis of Vehicle-Bridge System Based on Multi-Body Dynamics using physics-based ML model
[114]

The experimental modal properties of civil engineering structures (say the natural frequencies,
vibration modes, and frequency response functions) may be determined by using any of the
available system identification methods, such as experimental modal analysis (EMA) or
operational modal analysis (OMA). Table 5 provides a systematic organization and taxonomy of
the application-centric objectives and methods of existing physics-based ML for SHM
applications.

Table 5. Table of literature classified by existing physics-based ML for SHM applications.

Paper | Year | Application
Probabilistic physics-guided machine learning for fatigue data analysis [112] | 2020 | SHM
Finite element–based machine-learning approach to detect damage in bridges under operational and environmental variations [113] | 2019 | SHM, CFD
A hybrid physics-assisted machine-learning-based damage detection using Lamb wave [114] | 2021 | SHM
Data-Driven and Model-Based Methods with Physics-Guided Machine Learning for Damage Identification [115] | 2020 | SHM
Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification [116] | 2018 | Uncertainty quantification, Turbulence modeling, SHM

5.3 Structural design and analysis


Structures designed with the internal force flow in their members can save a lot of money on materials and labor, but designing them is difficult and time-consuming. Physics-based ML methods have allowed geometry-based structural design methods to reemerge, particularly in three dimensions. The complex geometric diagrams of forces can now be constructed in milliseconds with current digital computation, allowing structural designers and architects to explore an unexplored realm of efficient spatial structural forms in 3D. One new design strategy is a physics-based ML technique that considers structural performance and construction limitations to speed up topological design; an example is shown in Figure 9. Table 6 provides a systematic organization and taxonomy of the application-centric objectives and methods of existing physics-based ML for structural design and analysis applications.

Figure 9. The nonlinear effects of different design variables (i.e., subdivision rules) on the final structural performance measures, using self-organizing maps for a metal bridge with physics-based ML [117]

Table 6. Table of literature classified by existing physics-based ML for structural design and analysis
applications.

Paper | Year | Application
Machine learning assisted evaluations in structural design and construction [117] | 2020 | Materials science
Utilizing physics-based input features within a machine learning model to predict wind speed forecasting error [118] | 2021 | Power system state estimation, Aerodynamics
Application of Physics-Based Machine Learning in Combustion Modeling [119] | 2019 | CFD
JUNIPR: a framework for unsupervised machine learning in particle physics [120] | 2018 | CFD
Machine learning for metal additive manufacturing: predicting temperature and melt pool fluid dynamics using physics-informed neural networks [121] | 2021 | CFD
Predicting the dissolution kinetics of silicate glasses by topology-informed machine learning [122] | 2019 | Materials science
Machine learning techniques for detecting topological avatars of new physics [123] | 2019 | Materials science
Physics-informed machine learning for composition–process–property design: Shape memory alloy demonstration [124] | 2020 | Materials science
A novel ozone profile shape retrieval using a full-physics inverse learning machine (FP-ILM) [125] | 2017 | Materials science
Machine-learning prediction of thermal transport in porous media with physics-based descriptors [126] | 2020 | Materials science
Deep shape from polarization [127] | 2019 | Materials science
Model order reduction assisted by deep neural networks (ROM-net) [128] | 2020 | Structural mechanics, Materials science
Predicting AC Optimal Power Flows: Combined Deep Learning and Lagrangian Dual Methods [129] | 2019 | Electrical power systems
Deep Fluids: A Generative Network for Parameterized Fluid Simulations [130] | 2018 | CFD
Multi-Fidelity Physics-Constrained Neural Network and Its Application in Materials Modeling [131] | 2019 | Structural mechanics, Materials science
HybridNet: Integrating Model-based and Data-driven Learning to Predict Evolution of Dynamical Systems [132] | 2018 | CFD
A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems [133] | 2020 | Geosciences
PPINN: Parareal physics-informed neural network for time-dependent PDEs [134] | 2020 | CFD, Structural mechanics
A deep learning-based approach to reduced-order modeling for turbulent flow control using LSTM neural networks [33] | 2018 | CFD
Physics-induced graph neural network: An application to wind-farm power estimation [135] | 2019 | CFD
Physics-based convolutional neural network for fault diagnosis of rolling element bearings [136] | 2019 | Materials science, Structural mechanics
Machine learning closures for model order reduction of thermal fluids [137] | 2018 | Heat transfer
Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data [138] | 2016 | Materials science, Structural mechanics
A reduced-order model for turbulent flows in the urban environment using machine learning [34] | 2019 | CFD
A Framework for Modeling Flood Depth Using a Hybrid of Hydraulics and Machine Learning [139] | 2020 | CFD
Evaluation and machine learning improvement of global hydrological model-based flood simulations [13] | 2019 | CFD
Real-time power system state estimation via deep unrolled neural networks [140] | 2018 | Power system state estimation
Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control [141] | 2019 | CFD
Physics-guided Convolutional Neural Network (PhyCNN) for Data-driven Seismic Response Modeling [142] | 2019 | Structural mechanics, Materials science

6. Future directions

Civil engineering design and construction, already a labor-intensive industry, face many challenges, including an aging workforce, increased labor costs, productivity losses, and a lack of onsite workers. All of these constraints affect industry profits. Under these circumstances, physics-based ML will inevitably be utilized to automate some civil engineering and construction processes. Data play a crucial role in the applications of physics-based ML in civil engineering. Therefore, it is essential to establish a public data set for civil engineering. For example, the general-purpose dataset ImageNet has extensively promoted research in the DL field, and a construction-related dataset could do the same for construction automation. With such public data sets, researchers can focus more on physics-based ML models.
7. Conclusions

Since 2000, ML technology has gradually received more attention in civil engineering and plays an increasingly important role in developing automated technologies. However, the application of even state-of-the-art black-box ML models has often met with limited success in civil engineering due to their large data requirements, inability to produce physically consistent results, and lack of generalizability to out-of-sample scenarios. The main challenges are quality data acquisition and overcoming the impact of the site environment. After thoroughly reviewing the literature on this topic, this paper suggests that multiple teams could jointly establish an extensive and complete database with the same annotation rules to ease the dilemma of data acquisition. At present, researchers in civil engineering have primarily implemented ML as a tool for feature extraction or detection. We envision that merging ML models and physics principles will play an invaluable role in the future of scientific modeling to address the pressing environmental and physical modeling problems in civil engineering. Future research should develop a fuller understanding of physics-based ML and combine it with the specific knowledge domains of civil engineering to develop dedicated physics-based ML models for civil engineering applications.

References:

1. Vadyala, S.R., et al., Prediction of the number of covid-19 confirmed cases based on k-means-lstm.
arXiv preprint arXiv:2006.14752, 2020.
2. Vadyala, S.R. and S.N. Betgeri, Physics-Informed Neural Network Method for Solving One-
Dimensional Advection Equation Using PyTorch. arXiv preprint arXiv:2103.09662, 2021.
3. Sai Nethra Betgeri, J.C.M., David B. Smith. Comparison of Sewer Conditions Ratings with Repair
Recommendation Reports. in North American Society for Trenchless Technology (NASTT) 2021.
2021. https://member.nastt.org/products/product/2021-TM1-T6-01.
4. V Yugandhar, B.P., BS Nethra. Statistical Software Packages for Research In Social Sciences. in
Recent Research Advancements in Information Technology. 2014.
5. McGovern, A., et al., Making the black box more transparent: Understanding the physical
implications of machine learning. Bulletin of the American Meteorological Society, 2019. 100(11):
p. 2175-2199.
6. Alber, M., et al., Integrating machine learning and multiscale modeling—perspectives, challenges,
and opportunities in the biological, biomedical, and behavioral sciences. NPJ digital medicine,
2019. 2(1): p. 1-11.
7. Baker, N., et al., Workshop report on basic research needs for scientific machine learning: Core
technologies for artificial intelligence. 2019, USDOE Office of Science (SC), Washington, DC (United
States).
8. Karpatne, A., et al., Theory-guided data science: A new paradigm for scientific discovery from data.
IEEE Transactions on knowledge and data engineering, 2017. 29(10): p. 2318-2331.
9. Rai, R. and C.K. Sahu, Driven by data or derived through physics? a review of hybrid physics guided
machine learning techniques with cyber-physical system (cps) focus. IEEE Access, 2020. 8: p.
71050-71073.
10. Griewank, A. and A. Walther, Evaluating derivatives: principles and techniques of algorithmic
differentiation. 2008: SIAM.
11. Parikh, N. and S. Boyd, Proximal algorithms. Foundations and Trends in optimization, 2014. 1(3):
p. 127-239.
12. Boyd, S., N. Parikh, and E. Chu, Distributed optimization and statistical learning via the alternating
direction method of multipliers. 2011: Now Publishers Inc.
13. Yang, T., et al., Evaluation and machine learning improvement of global hydrological model-based
flood simulations. Environmental Research Letters, 2019. 14(11): p. 114027.
14. Lu, K., et al., Review for order reduction based on proper orthogonal decomposition and outlooks
of applications in mechanical systems. Mechanical Systems and Signal Processing, 2019. 123: p.
264-297.
15. Mignolet, M.P., et al., A review of indirect/non-intrusive reduced order modeling of nonlinear
geometric structures. Journal of Sound and Vibration, 2013. 332(10): p. 2437-2460.
16. Holmes, P.J., et al., Low-dimensional models of coherent structures in turbulence. Physics Reports,
1997. 287(4): p. 337-384.
17. Sarkar, A. and R. Ghanem, Mid-frequency structural dynamics with parameter uncertainty.
Computer Methods in Applied Mechanics and Engineering, 2002. 191(47-48): p. 5499-5513.
18. Chatterjee, A., An introduction to the proper orthogonal decomposition. Current science, 2000: p.
808-817.
19. Prud’Homme, C., et al., Reliable real-time solution of parametrized partial differential equations:
Reduced-basis output bound methods. J. Fluids Eng., 2002. 124(1): p. 70-80.
20. Burkardt, J., M. Gunzburger, and H.-C. Lee, Centroidal Voronoi tessellation-based reduced-order
modeling of complex systems. SIAM Journal on Scientific Computing, 2006. 28(2): p. 459-484.
21. Pearson, K., LIII. On lines and planes of closest fit to systems of points in space. London Edinburgh Dublin Philos. Mag. J. Sci., 1901. 2(11): p. 559-572.
22. Jolliffe, I.T. and J. Cadima, Principal component analysis: a review and recent developments.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering
Sciences, 2016. 374(2065): p. 20150202.
23. Couplet, M., C. Basdevant, and P. Sagaut, Calibrated reduced-order POD-Galerkin system for fluid
flow modelling. Journal of Computational Physics, 2005. 207(1): p. 192-220.
24. Zokagoa, J.-M. and A. Soulaïmani, Low-order modelling of shallow water equations for sensitivity
analysis using proper orthogonal decomposition. International Journal of Computational Fluid
Dynamics, 2012. 26(5): p. 275-295.
25. Amsallem, D. and C. Farhat, On the stability of reduced-order linearized computational fluid
dynamics models based on POD and Galerkin projection: descriptor vs non-descriptor forms, in
Reduced order methods for modeling and computational reduction. 2014, Springer. p. 215-233.
26. Zokagoa, J.-M. and A. Soulaïmani, A POD-based reduced-order model for uncertainty analyses in
shallow water flows. International Journal of Computational Fluid Dynamics, 2018. 32(6-7): p.
278-292.
27. Barthelmann, V., E. Novak, and K. Ritter, High dimensional polynomial interpolation on sparse
grids. Advances in Computational Mathematics, 2000. 12(4): p. 273-288.
28. Ghanem, R.G. and P.D. Spanos, Stochastic finite element method: Response statistics, in Stochastic
finite elements: a spectral approach. 1991, Springer. p. 101-119.
29. Sun, X., X. Pan, and J.-I. Choi, A non-intrusive reduced-order modeling method using polynomial
chaos expansion. arXiv preprint arXiv:1903.10202, 2019.
30. Bellman, R., Dynamic programming. Science, 1966. 153(3731): p. 34-37.
31. Verleysen, M. and D. François. The curse of dimensionality in data mining and time series
prediction. in International work-conference on artificial neural networks. 2005. Springer.
32. Chen, G., et al., Support-vector-machine-based reduced-order model for limit cycle oscillation
prediction of nonlinear aeroelastic system. Mathematical problems in engineering, 2012. 2012.
33. Mohan, A.T. and D.V. Gaitonde, A deep learning based approach to reduced order modeling for
turbulent flow control using LSTM neural networks. arXiv preprint arXiv:1804.09269, 2018.
34. Xiao, D., et al., A reduced order model for turbulent flows in the urban environment using machine
learning. Building and Environment, 2019. 148: p. 323-337.
35. Wan, Z.Y., et al., Data-assisted reduced-order modeling of extreme events in complex dynamical
systems. PloS one, 2018. 13(5): p. e0197704.
36. Galerkin, B. and W.I. Petrograd, Series development for some cases of equilibrium of plates and
beams. Wjestnik Ingenerow Petrograd, 1915. 19: p. 897.
37. Saltzman, B., Finite amplitude free convection as an initial value problem—I. Journal of
atmospheric sciences, 1962. 19(4): p. 329-341.
38. Lorenz, E.N., Deterministic nonperiodic flow. Journal of atmospheric sciences, 1963. 20(2): p. 130-
141.
39. Lumley, J.L., The structure of inhomogeneous turbulent flows. Atmospheric turbulence and radio
wave propagation, 1967.
40. Sirovich, L., Turbulence and the dynamics of coherent structures. I. Coherent structures. Quarterly
of applied mathematics, 1987. 45(3): p. 561-571.
41. Aubry, N., et al., The dynamics of coherent structures in the wall region of a turbulent boundary
layer. Journal of fluid Mechanics, 1988. 192: p. 115-173.
42. Rempfer, D. and H.F. Fasel, Dynamics of three-dimensional coherent structures in a flat-plate
boundary layer. Journal of Fluid Mechanics, 1994. 275: p. 257-283.
43. Everson, R. and L. Sirovich, Karhunen–Loeve procedure for gappy data. JOSA A, 1995. 12(8): p.
1657-1664.
44. Ravindran, S.S., A reduced‐order approach for optimal control of fluids using proper orthogonal
decomposition. International journal for numerical methods in fluids, 2000. 34(5): p. 425-448.
45. Chen, Z. and S. Dai, Adaptive Galerkin Methods with Error Control for a Dynamical Ginzburg--
Landau Model in Superconductivity. SIAM Journal on Numerical Analysis, 2001. 38(6): p. 1961-
1985.
46. Willcox, K. and J. Peraire, Balanced model reduction via the proper orthogonal decomposition.
AIAA journal, 2002. 40(11): p. 2323-2330.
47. Couplet, M., P. Sagaut, and C. Basdevant, Intermodal energy transfers in a proper orthogonal
decomposition–Galerkin representation of a turbulent separated flow. Journal of Fluid Mechanics,
2003. 491: p. 275-284.
48. Sirisup, S. and G.E. Karniadakis, A spectral viscosity method for correcting the long-term behavior
of POD models. Journal of Computational Physics, 2004. 194(1): p. 92-116.
49. Barrault, M., et al., An ‘empirical interpolation’method: application to efficient reduced-basis
discretization of partial differential equations. Comptes Rendus Mathematique, 2004. 339(9): p.
667-672.
50. Mezić, I., Spectral properties of dynamical systems, model reduction and decompositions.
Nonlinear Dynamics, 2005. 41(1): p. 309-325.
51. Rozza, G., D.B.P. Huynh, and A.T. Patera, Reduced basis approximation and a posteriori error
estimation for affinely parametrized elliptic coercive partial differential equations. Archives of
Computational Methods in Engineering, 2007. 15(3): p. 1.
52. Cao, Y., et al., A reduced‐order approach to four‐dimensional variational data assimilation using
proper orthogonal decomposition. International Journal for Numerical Methods in Fluids, 2007.
53(10): p. 1571-1583.
53. Amsallem, D. and C. Farhat, Interpolation method for adapting reduced-order models and
application to aeroelasticity. AIAA journal, 2008. 46(7): p. 1803-1813.
54. Astrid, P., et al., Missing point estimation in models described by proper orthogonal
decomposition. IEEE Transactions on Automatic Control, 2008. 53(10): p. 2237-2251.
55. Rowley, C.W., et al., Spectral analysis of nonlinear flows. Journal of fluid mechanics, 2009. 641: p.
115-127.
56. Schmid, P.J., Dynamic mode decomposition of numerical and experimental data. Journal of fluid
mechanics, 2010. 656: p. 5-28.
57. Chaturantabut, S. and D.C. Sorensen, Nonlinear model reduction via discrete empirical
interpolation. SIAM Journal on Scientific Computing, 2010. 32(5): p. 2737-2764.
58. Carlberg, K.T., et al., The GNAT nonlinear model-reduction method with application to large-scale
turbulent flows. 2013, Sandia National Lab.(SNL-CA), Livermore, CA (United States).
59. Cordier, L., et al., Identification strategies for model-based control. Experiments in fluids, 2013.
54(8): p. 1-21.
60. Östh, J., et al., On the need for a nonlinear subscale turbulence term in POD models as exemplified
for a high-Reynolds-number flow over an Ahmed body. Journal of Fluid Mechanics, 2014. 747: p.
518-544.
61. Ballarin, F., et al., Supremizer stabilization of POD–Galerkin approximation of parametrized steady
incompressible Navier–Stokes equations. International Journal for Numerical Methods in
Engineering, 2015. 102(5): p. 1136-1161.
62. Schlegel, M. and B.R. Noack, On long-term boundedness of Galerkin models. Journal of Fluid
Mechanics, 2015. 765: p. 325-352.
63. Peherstorfer, B. and K. Willcox, Data-driven operator inference for nonintrusive projection-based
model reduction. Computer Methods in Applied Mechanics and Engineering, 2016. 306: p. 196-
215.
64. Sieber, M., C.O. Paschereit, and K. Oberleithner, Spectral proper orthogonal decomposition.
Journal of Fluid Mechanics, 2016. 792: p. 798-828.
65. Towne, A., O.T. Schmidt, and T. Colonius, Spectral proper orthogonal decomposition and its
relationship to dynamic mode decomposition and resolvent analysis. Journal of Fluid Mechanics,
2018. 847: p. 821-867.
66. Reiss, J., et al., The shifted proper orthogonal decomposition: A mode decomposition for multiple
transport phenomena. SIAM Journal on Scientific Computing, 2018. 40(3): p. A1322-A1344.
67. Loiseau, J.-C., B.R. Noack, and S.L. Brunton, Sparse reduced-order modelling: sensor-based
dynamics to full-state estimation. Journal of Fluid Mechanics, 2018. 844: p. 459-490.
68. Mendez, M., M. Balabane, and J.-M. Buchlin, Multi-scale proper orthogonal decomposition of
complex fluid flows. Journal of Fluid Mechanics, 2019. 870: p. 988-1036.
69. Fernex, D., B.R. Noack, and R. Semaan, Cluster-based network modeling—From snapshots to
complex dynamical systems. Science Advances, 2021. 7(25): p. eabf5006.
70. Rosenblatt, F., The perceptron: a probabilistic model for information storage and organization in
the brain. Psychological review, 1958. 65(6): p. 386.
71. Linnainmaa, S., Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics,
1976. 16(2): p. 146-160.
72. Rumelhart, D.E., G.E. Hinton, and R.J. Williams, Learning representations by back-propagating
errors. nature, 1986. 323(6088): p. 533-536.
73. Hopfield, J.J., Neural networks and physical systems with emergent collective computational
abilities. Proceedings of the national academy of sciences, 1982. 79(8): p. 2554-2558.
74. Hochreiter, S. and J. Schmidhuber, Long short-term memory. Neural computation, 1997. 9(8): p.
1735-1780.
75. Dechter, R., Learning while searching in constraint-satisfaction problems. 1986.
76. Goodfellow, I., Y. Bengio, and A. Courville, Deep learning. 2016: MIT press.
77. Brunton, S.L., J.L. Proctor, and J.N. Kutz, Discovering governing equations from data by sparse
identification of nonlinear dynamical systems. Proceedings of the national academy of sciences,
2016. 113(15): p. 3932-3937.
78. Kutz, J.N., Deep learning in fluid dynamics. Journal of Fluid Mechanics, 2017. 814: p. 1-4.
79. Carlberg, K.T., et al., Recovering missing CFD data for high-order discretizations using deep neural
networks and dynamics learning. Journal of Computational Physics, 2019. 395: p. 105-124.
80. Hanna, B.N., et al., Machine-learning based error prediction approach for coarse-grid
Computational Fluid Dynamics (CG-CFD). Progress in Nuclear Energy, 2020. 118: p. 103140.
81. Després, B. and H. Jourdren, Machine Learning design of Volume of Fluid schemes for compressible
flows. Journal of Computational Physics, 2020. 408: p. 109275.
82. Hsieh, W.W., Machine learning methods in the environmental sciences: Neural networks and
kernels. 2009: Cambridge university press.
83. Raissi, M., P. Perdikaris, and G.E. Karniadakis, Machine learning of linear differential equations
using Gaussian processes. Journal of Computational Physics, 2017. 348: p. 683-693.
84. Raissi, M., P. Perdikaris, and G.E. Karniadakis, Physics-informed neural networks: A deep learning
framework for solving forward and inverse problems involving nonlinear partial differential
equations. Journal of Computational Physics, 2019. 378: p. 686-707.
85. Raissi, M., et al., Deep learning of vortex-induced vibrations. Journal of Fluid Mechanics, 2019.
861: p. 119-137.
86. Lapidus, L. and G.F. Pinder, Numerical solution of partial differential equations in science and
engineering. 2011: John Wiley & Sons.
87. Williams, C.K. and C.E. Rasmussen, Gaussian processes for machine learning. Vol. 2. 2006: MIT
press Cambridge, MA.
88. Barron, A.R., Universal approximation bounds for superpositions of a sigmoidal function. IEEE
Transactions on Information theory, 1993. 39(3): p. 930-945.
89. Lu, J., et al., Deep network approximation for smooth functions. arXiv preprint arXiv:2001.03040,
2020.
90. Yarotsky, D. Optimal approximation of continuous functions by very deep ReLU networks. in
Conference on Learning Theory. 2018. PMLR.
91. Li, Z., et al., Fourier neural operator for parametric partial differential equations. arXiv preprint
arXiv:2010.08895, 2020.
92. Lu, L., P. Jin, and G.E. Karniadakis, Deeponet: Learning nonlinear operators for identifying
differential equations based on the universal approximation theorem of operators. arXiv preprint
arXiv:1910.03193, 2019.
93. Cai, S., et al., DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator
approximation by neural networks. Journal of Computational Physics, 2021. 436: p. 110296.
94. Mao, Z., et al., DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry
behind a normal shock using neural-network approximation of operators. arXiv preprint
arXiv:2011.03349, 2020.
95. Dissanayake, M. and N. Phan‐Thien, Neural‐network‐based approximations for solving partial
differential equations. communications in Numerical Methods in Engineering, 1994. 10(3): p. 195-
201.
96. Lagaris, I.E., A. Likas, and D.I. Fotiadis, Artificial neural networks for solving ordinary and partial
differential equations. IEEE Transactions on Neural Networks, 1998. 9(5): p. 987-1000.
97. Berg, J. and K. Nyström, A unified deep artificial neural network approach to partial differential
equations in complex geometries. Neurocomputing, 2018. 317: p. 28-41.
98. Weinan, E. and B. Yu, The deep Ritz method: a deep learning-based numerical algorithm for solving
variational problems. Communications in Mathematics and Statistics, 2018. 6(1): p. 1-12.
99. Zhu, Y., et al., Physics-constrained deep learning for high-dimensional surrogate modeling and
uncertainty quantification without labeled data. Journal of Computational Physics, 2019. 394: p.
56-81.
100. Fuks, O. and H.A. Tchelepi, Limitations of physics informed machine learning for nonlinear two-
phase transport in porous media. Journal of Machine Learning for Modeling and Computing, 2020.
1(1).
101. Raissi, M., Deep hidden physics models: Deep learning of nonlinear partial differential equations.
The Journal of Machine Learning Research, 2018. 19(1): p. 932-955.
102. Wang, S., Y. Teng, and P. Perdikaris, Understanding and mitigating gradient pathologies in physics-
informed neural networks. arXiv preprint arXiv:2001.04536, 2020.
103. Yitmen, I., et al., An Adapted Model of Cognitive Digital Twins for Building Lifecycle Management.
Applied Sciences, 2021. 11(9): p. 4276.
104. Barazzetti, L., et al., Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from
laser scans. Simulation Modelling Practice and Theory, 2015. 57: p. 71-87.
105. Bijelić, N., T. Lin, and G.G. Deierlein, Efficient intensity measures and machine learning
classification algorithms for collapse prediction informed by physics-based ground motion
simulations. Earthquake Spectra, 2020: p. 1188-1207.
106. Bhasme, P., J. Vagadiya, and U. Bhatia, Enhancing predictive skills in physically-consistent way:
Physics Informed Machine Learning for Hydrological Processes. arXiv preprint arXiv:2104.11009,
2021.
107. Erichson, N.B., M. Muehlebach, and M.W. Mahoney, Physics-informed autoencoders for
Lyapunov-stable fluid flow prediction. arXiv preprint arXiv:1905.10866, 2019.
108. Zhou, S., et al., A Data-Driven and Physics-Based Approach to Exploring Interdependency of
Interconnected Infrastructure, in Computing in Civil Engineering 2019: Data, Sensing, and
Analytics. 2019, American Society of Civil Engineers, Reston, VA. p. 82-88.
109. Khandelwal, A., et al., Physics guided machine learning methods for hydrology. arXiv preprint
arXiv:2012.02854, 2020.
110. Geneva, N. and N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep
auto-regressive networks. Journal of Computational Physics, 2020. 403: p. 109056.
111. Xiao, D., et al., A domain decomposition non-intrusive reduced order model for turbulent flows.
Computers & Fluids, 2019. 182: p. 15-27.
112. Chen, J. and Y. Liu, Probabilistic physics-guided machine learning for fatigue data analysis. Expert
Systems with Applications, 2021. 168: p. 114316.
113. Figueiredo, E., et al., Finite element–based machine-learning approach to detect damage in
bridges under operational and environmental variations. Journal of Bridge Engineering, 2019.
24(7): p. 04019061.
114. Rai, A. and M. Mitra, A hybrid physics-assisted machine-learning-based damage detection using
Lamb wave. Sādhanā, 2021. 46(2): p. 1-11.
115. Zhang, Z., Data-Driven and Model-Based Methods with Physics-Guided Machine Learning for
Damage Identification. 2020.
116. Tripathy, R.K. and I. Bilionis, Deep UQ: Learning deep neural network surrogate models for high
dimensional uncertainty quantification. Journal of Computational Physics, 2018. 375: p. 565-588.
117. Zheng, H., V. Moosavi, and M. Akbarzadeh, Machine learning assisted evaluations in structural
design and construction. Automation in Construction, 2020. 119: p. 103346.
118. Vassallo, D., R. Krishnamurthy, and H.J. Fernando, Utilizing physics-based input features within a
machine learning model to predict wind speed forecasting error. Wind Energy Science, 2021. 6(1):
p. 295-309.
119. Takbiri-Borujeni, A. and M. Ayoobi. Application of physics-based machine learning in combustion
modeling. in 11th US National Combustion Meeting. 2019.
120. Andreassen, A., et al., JUNIPR: a framework for unsupervised machine learning in particle physics.
The European Physical Journal C, 2019. 79(2): p. 1-24.
121. Zhu, Q., Z. Liu, and J. Yan, Machine learning for metal additive manufacturing: predicting
temperature and melt pool fluid dynamics using physics-informed neural networks. Computational
Mechanics, 2021. 67(2): p. 619-635.
122. Liu, H., et al., Predicting the dissolution kinetics of silicate glasses by topology-informed machine
learning. Npj Materials Degradation, 2019. 3(1): p. 1-12.
123. Bevan, A., Machine learning techniques for detecting topological avatars of new physics.
Philosophical Transactions of the Royal Society A, 2019. 377(2161): p. 20190392.
124. Liu, S., et al., Physics-informed machine learning for composition–process–property design: Shape
memory alloy demonstration. Applied Materials Today, 2021. 22: p. 100898.
125. Xu, J., et al., A novel ozone profile shape retrieval using full-physics inverse learning machine (FP-
ILM). IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017.
10(12): p. 5442-5457.
126. Wei, H., H. Bao, and X. Ruan, Machine learning prediction of thermal transport in porous media
with physics-based descriptors. International Journal of Heat and Mass Transfer, 2020. 160: p.
120176.
127. Ba, Y., et al. Deep shape from polarization. in Computer Vision–ECCV 2020: 16th European
Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIV 16. 2020. Springer.
128. Daniel, T., et al., Model order reduction assisted by deep neural networks (ROM-net). Advanced
Modeling and Simulation in Engineering Sciences, 2020. 7(1): p. 1-27.
129. Fioretto, F., T.W. Mak, and P. Van Hentenryck. Predicting AC optimal power flows: Combining deep
learning and lagrangian dual methods. in Proceedings of the AAAI Conference on Artificial
Intelligence. 2020.
130. Kim, B., et al. Deep fluids: A generative network for parameterized fluid simulations. in Computer
Graphics Forum. 2019. Wiley Online Library.
131. Liu, D. and Y. Wang, Multi-fidelity physics-constrained neural network and its application in
materials modeling. Journal of Mechanical Design, 2019. 141(12).
132. Long, Y., X. She, and S. Mukhopadhyay. HybridNet: integrating model-based and data-driven
learning to predict evolution of dynamical systems. in Conference on Robot Learning. 2018. PMLR.
133. Meng, X. and G.E. Karniadakis, A composite neural network that learns from multi-fidelity data:
Application to function approximation and inverse PDE problems. Journal of Computational
Physics, 2020. 401: p. 109020.
134. Meng, X., et al., PPINN: Parareal physics-informed neural network for time-dependent PDEs.
Computer Methods in Applied Mechanics and Engineering, 2020. 370: p. 113250.
135. Park, J. and J. Park, Physics-induced graph neural network: An application to wind-farm power
estimation. Energy, 2019. 187: p. 115883.
136. Sadoughi, M. and C. Hu, Physics-based convolutional neural network for fault diagnosis of rolling
element bearings. IEEE Sensors Journal, 2019. 19(11): p. 4181-4192.
137. San, O. and R. Maulik, Machine learning closures for model order reduction of thermal fluids.
Applied Mathematical Modelling, 2018. 60: p. 681-710.
138. Wang, J.-X., J.-L. Wu, and H. Xiao, Physics-informed machine learning approach for reconstructing
Reynolds stress modeling discrepancies based on DNS data. Physical Review Fluids, 2017. 2(3): p.
034603.
139. Hosseiny, H., et al., A framework for modeling flood depth using a hybrid of hydraulics and
machine learning. Scientific Reports, 2020. 10(1): p. 1-14.
140. Zhang, L., G. Wang, and G.B. Giannakis. Real-time power system state estimation via deep unrolled
neural networks. in 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP).
2018. IEEE.
141. Zhong, Y.D., B. Dey, and A. Chakraborty, Symplectic ode-net: Learning hamiltonian dynamics with
control. arXiv preprint arXiv:1909.12077, 2019.
142. Zhang, R., Y. Liu, and H. Sun, Physics-guided convolutional neural network (PhyCNN) for data-
driven seismic response modeling. Engineering Structures, 2020. 215: p. 110704.