
Journal of Computational Physics 401 (2020) 109020

A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems

Xuhui Meng (a), George Em Karniadakis (a,b,*)

(a) Division of Applied Mathematics, Brown University, Providence, RI, 02912, USA
(b) Pacific Northwest National Laboratory, Richland, WA, 99354, USA

Article history: Received 27 February 2019; Received in revised form 28 August 2019; Accepted 7 October 2019; Available online 11 October 2019.

Keywords: Multi-fidelity; Physics-informed neural networks; Adversarial data; Porous media; Reactive transport

Abstract

Currently the training of neural networks relies on data of comparable accuracy, but in real applications only a very small set of high-fidelity data is available, while inexpensive lower-fidelity data may be plentiful. We propose a new composite neural network (NN) that can be trained based on multi-fidelity data. It is comprised of three NNs, with the first NN trained using the low-fidelity data and coupled to two high-fidelity NNs, one with activation functions and another one without, in order to discover and exploit nonlinear and linear correlations, respectively, between the low-fidelity and the high-fidelity data. We first demonstrate the accuracy of the new multi-fidelity NN for approximating some standard benchmark functions, but also a 20-dimensional function that is not easy to approximate with other methods, e.g. Gaussian process regression. Subsequently, we extend the recently developed physics-informed neural networks (PINNs) to be trained with multi-fidelity data sets (MPINNs). MPINNs contain four fully-connected neural networks, where the first one approximates the low-fidelity data, while the second and third construct the correlation between the low- and high-fidelity data and produce the multi-fidelity approximation, which is then used in the last NN that encodes the partial differential equations (PDEs). Specifically, by decomposing the correlation into a linear and nonlinear part, the present model is capable of learning both the linear and complex nonlinear correlations between the low- and high-fidelity data adaptively. By training the MPINNs, we can: (1) obtain the correlation between the low- and high-fidelity data, (2) infer the quantities of interest based on a few scattered data, and (3) identify the unknown parameters in the PDEs. In particular, we employ the MPINNs to learn the hydraulic conductivity field for unsaturated flows as well as the reactive models for reactive transport. The results demonstrate that MPINNs can achieve relatively high accuracy based on a very small set of high-fidelity data. Despite the relatively low dimension and limited number of fidelities (two fidelity levels) for the benchmark problems in the present study, the proposed model can be readily extended to very high-dimensional regression and classification problems involving multi-fidelity data.

© 2019 Elsevier Inc. All rights reserved.

* Corresponding author at: Division of Applied Mathematics, Brown University, Providence, RI, 02912, USA.
E-mail address: [email protected] (G.E. Karniadakis).

https://doi.org/10.1016/j.jcp.2019.109020

1. Introduction

The recent rapid developments in deep learning have also influenced the computational modeling of physical systems,
e.g. in geosciences and engineering [1–5]. Generally, large numbers of high-fidelity data sets are required for optimization of
complex physical systems, which may lead to computationally prohibitive costs. On the other hand, inadequate high-fidelity
data result in inaccurate approximations and possibly erroneous designs. Multi-fidelity modeling has been shown to be both
efficient and effective in achieving high accuracy in diverse applications by leveraging both the low- and high-fidelity data
[6–9]. In the framework of multi-fidelity modeling, we assume that accurate but expensive high-fidelity data are scarce,
while the cheaper and less accurate low-fidelity data are abundant. An example is the use of a few experimental measure-
ments, which are hard to obtain, combined with synthetic data obtained from running a computational model. In many
cases, the low-fidelity data can supply useful information on the trends for high-fidelity data, hence multi-fidelity model-
ing can greatly enhance prediction accuracy based on a small set of high-fidelity data in comparison to the single-fidelity
modeling [6,10,11].
The construction of cross-correlation between the low- and high-fidelity data is crucial in multi-fidelity methods. Several
methods have been developed to estimate such correlations, such as the response surface models [12,13], polynomial chaos
expansion [14,15], Gaussian process regression (GPR) [7,9,10,16], artificial neural networks [17], and moving least squares
[18,19]. Interested readers can refer to [20] for a comprehensive review of these methods. Among all the existing methods, Gaussian process regression in combination with the linear autoregressive scheme has drawn much attention in a wide range of applications [9,21]. For instance, Babaee et al. applied this approach to mixed convection to propose an improved correlation for heat transfer, which outperforms existing empirical correlations [21]. We note that GPR with a linear autoregressive scheme can only capture linear correlations between the low- and high-fidelity data. Perdikaris et al. then extended the method in [6] to enable it to learn complex nonlinear correlations [10]; this extension has been successfully employed to estimate the hydraulic conductivity from multi-fidelity data for the pressure head in subsurface flows [22]. Although great progress has already been made, multi-fidelity approaches based on GPR still have some limitations, e.g., approximation of discontinuous functions [8], high-dimensional problems [10], and inverse problems with strong nonlinearities (i.e., nonlinear partial differential equations) [9]. In addition, the optimization required to train a GPR model is difficult to implement. Therefore, multi-fidelity approaches which can overcome these drawbacks are urgently needed.
Deep neural networks can easily handle problems with almost any nonlinearity in both low and high dimensions.
In addition, the recently proposed physics-informed neural networks (PINNs) have shown expressive power for learning the
unknown parameters or functions in inverse PDE problems with nonlinearities [23]. Examples of successful applications of
PINNs include (1) learning the velocity and pressure fields based on partial observations of spatial-temporal visualizations of
a passive scalar, i.e., solute concentration [24], and (2) estimation of the unknown constitutive relationship in the nonlinear
diffusion equation for unsaturated flows [25]. Despite the expressive power of PINNs, it has been documented that a large
set of high-fidelity data is required for identifying the unknown parameters in nonlinear PDEs. To leverage the merits
of deep neural networks (DNNs) and the concept of multi-fidelity modeling, we propose to develop multi-fidelity DNNs
and multi-fidelity PINNs (MPINNs), which are expected to have the following attractive features: (1) they can learn both
the linear and nonlinear correlations adaptively; (2) they are suitable for high-dimensional problems; (3) they can handle
inverse problems with strong nonlinearities; and (4) they are easy to implement, as we demonstrate in the present work.
The rest of the paper is organized as follows: the key concepts of multi-fidelity DNNs and MPINNs are presented in
Sec. 2, while results for function approximation and inverse PDE problems are shown in Sec. 3. Finally, a summary for this
work is given in Sec. 4. In the Appendix we include a basic review of the embedding theory.

2. Multi-fidelity deep neural networks and MPINNs

The key starting point in multi-fidelity modeling is to discover and exploit the relation between low- and high-fidelity
data [20]. A widely used comprehensive correlation is expressed as

y_H = ρ(x) y_L + δ(x),   (1)


where y_L and y_H are, respectively, the low- and high-fidelity data [20], ρ(x) is the multiplicative correlation surrogate, and δ(x) is the additive correlation surrogate. Clearly, multi-fidelity models based on this relation are only capable of handling linear correlations between the two fidelity levels. However, there exist many interesting cases that go beyond the linear correlation in Eq. (1) [10]. For instance, the correlation between the low-fidelity experimental data and the high-fidelity direct numerical simulations in mixed convection flows past a cylinder is nonlinear [10,21]. In order to capture the nonlinear correlation, we put forth a generalized autoregressive scheme, which is expressed as

y_H = F(y_L) + δ(x),   (2)

where F(·) is an unknown (linear/nonlinear) function that maps the low-fidelity data to the high-fidelity level. We can further write Eq. (2) as

y_H = F(x, y_L).   (3)

Fig. 1. Schematic of the multi-fidelity DNN and MPINN. The left box (blue nodes) represents the low-fidelity DNN NN_L(x, θ), connected to the box with green dots representing the two high-fidelity DNNs NN_Hi(x, y_L, γ_i) (i = 1, 2). In the case of MPINN, the combined output of the two high-fidelity DNNs is input to an additional PDE-induced DNN, in which the differential operators ∂ = (∂_t, ∂_x, ∂_y, ∂_x², ∂_y², ...) act on y_H; this last DNN has a very complicated graph whose structure is determined by the specific PDE considered. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)

To explore the linear/nonlinear correlation adaptively, we then decompose F(·) into a linear and a nonlinear part,

F = F_l + F_nl,   (4)

where F_l and F_nl denote the linear and nonlinear terms in F, respectively. Now, we construct the correlation as

y_H = F_l(x, y_L) + F_nl(x, y_L).   (5)


The architecture of the proposed multi-fidelity DNN and MPINN is illustrated in Fig. 1; it is composed of four fully-connected neural networks. The first one, NN_L(x_L, θ), is employed to approximate the low-fidelity data, while the second and third NNs, NN_Hi(x, y_L, β, γ_i) (i = 1, 2), approximate the correlation between the low- and high-fidelity data; the last NN, NN_fe, is induced by encoding the governing equations, e.g. the partial differential equations (PDEs). In addition, F_l = NN_H1 and F_nl = NN_H2; θ, β, and γ_i (i = 1, 2) are the unknown parameters of the NNs, which can be learned by minimizing the following loss function:

MSE = MSE_yL + MSE_yH + MSE_fe + λ Σ_i β_i²,   (6)

where

MSE_yL = (1/N_yL) Σ_{i=1}^{N_yL} ( |y_L* − y_L|² + |∇y_L* − ∇y_L|² ),   (7)

MSE_yH = (1/N_yH) Σ_{i=1}^{N_yH} |y_H* − y_H|²,   (8)

MSE_fe = (1/N_f) Σ_{i=1}^{N_f} |f_e* − f_e|².   (9)

Here, ψ (ψ = y_L*, y_H*, and f_e*) denotes the outputs of NN_L, NN_H, and NN_fe, respectively; β is any weight in NN_L and NN_H2, and λ is the L2 regularization rate for β. L2 regularization has been widely adopted to prevent overfitting [26,27], and it is used here to reduce overfitting in both NN_L and NN_H2. In addition, we can also penalize ∇y_L if the gradient of the low-fidelity data is available, which helps the approximation of y_L. It is worth mentioning that the boundary/initial conditions for f_e can also be added to the loss function, in a similar fashion as in the standard PINNs introduced in detail in [23], so we do not elaborate on this issue here. In the present study, the loss function is optimized using the L-BFGS method together with Xavier's initialization, while the hyperbolic tangent is employed as the activation function in NN_L and NN_H2. We note that no activation function is included in NN_H1, because it is used to approximate the linear part of F.
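To make the construction concrete, the following is a minimal sketch of the composite network and of the loss in Eq. (6) without the PDE term, written in PyTorch as an illustration; the authors' implementation, the L-BFGS optimizer, and the Xavier initialization are not reproduced here, and all names are illustrative.

import torch
import torch.nn as nn

def mlp(sizes, act=nn.Tanh):
    # Fully connected network with tanh activations, as used in NN_L and NN_H2.
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), act()]
    layers.append(nn.Linear(sizes[-2], sizes[-1]))
    return nn.Sequential(*layers)

class MultiFidelityDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.nn_L = mlp([1, 20, 20, 1])       # low-fidelity surrogate NN_L
        self.nn_H1 = nn.Linear(2, 1)          # linear part F_l(x, y_L): no activation
        self.nn_H2 = mlp([2, 10, 10, 1])      # nonlinear part F_nl(x, y_L)

    def forward(self, x):
        y_L = self.nn_L(x)
        z = torch.cat([x, y_L], dim=1)
        # Eq. (5): y_H = F_l(x, y_L) + F_nl(x, y_L)
        return y_L, self.nn_H1(z) + self.nn_H2(z)

def loss_fn(model, x_L, y_L, x_H, y_H, lam=1e-2):
    # Eq. (6) without the PDE term MSE_fe and without the optional gradient penalty.
    y_L_pred, _ = model(x_L)
    _, y_H_pred = model(x_H)
    mse = ((y_L_pred - y_L) ** 2).mean() + ((y_H_pred - y_H) ** 2).mean()
    # L2 regularization on the weights beta of NN_L and NN_H2.
    reg = sum((p ** 2).sum() for net in (model.nn_L, model.nn_H2)
              for p in net.parameters())
    return mse + lam * reg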
Finally, the rationale behind the linear/nonlinear decomposition in Eq. (5) is explained in detail here. In general, one has
no prior knowledge on the correlation between the low- and high-fidelity data, which needs to be learned based on the

Fig. 2. Approximation of a continuous function from multi-fidelity data with linear correlation. (a) Training data at the low- (11 data points) and high-fidelity (4 data points) levels. (b) Predictions from a DNN using high-fidelity data only; also included are the results of Kriging. (c) Predictions from the multi-fidelity DNN (red dashed line), the multi-fidelity DNN without NN_H1 (blue dotted line), and Co-Kriging [7] (magenta dash-dotted line). (d) The red dashed line in the (x, y_L, y_H) plane represents Eq. (5) (on top of the exact black solid line) and the red dashed line in the (y_L, y_H) plane represents the correlation discovered between the high- and low-fidelity data (y_H = 2.007 y_L − 19.963x + 20.007 + ε, where ε is the nonlinear part, which is close to zero here); the blue solid line is the exact correlation (y_H = 2 y_L − 20x + 20).

given data. For a nonlinear correlation case, the training loss for NN_H2 can be much smaller than that of NN_H1, which makes the present approach favor the nonlinear correlation. For a linear correlation case, the training losses for NN_H1 and NN_H2 can be comparable if no regularization is included in NN_H2; by incorporating the regularization for NN_H2, the multi-fidelity DNN tends towards the linear correlation between the low- and high-fidelity data. Therefore, the present multi-fidelity framework can explore the linear/nonlinear correlation adaptively. To demonstrate the effectiveness of the present approach, we include both NN_H1 and NN_H2 in all the following test cases.

3. Results and discussion

Next we present several tests of the multi-fidelity DNN as well as the MPINN, the latter in the context of two inverse
PDE problems related to geophysical applications.

3.1. Function approximation

We first demonstrate the effectiveness of this multi-fidelity modeling in approximating both continuous and discontinu-
ous functions based on both linear and complicated nonlinear correlations between the low- and high-fidelity data.

3.1.1. Continuous function with linear correlation


We first consider a pedagogical example of approximating a one-dimensional function based on data from two fidelity levels. The low- and high-fidelity data are generated from:

y_L(x) = A(6x − 2)² sin(12x − 4) + B(x − 0.5) + C,   x ∈ [0, 1],   (10)

y_H(x) = (6x − 2)² sin(12x − 4),   (11)

where y_H is linear in y_L, with A = 0.5, B = 10, and C = −5. As shown in Fig. 2(a), the training points at the low- and high-fidelity levels are x_L = {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1} and x_H = {0, 0.4, 0.6, 1}, respectively.
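For illustration, these training sets can be generated directly from Eqs. (10)-(11); this is a NumPy sketch, not the authors' data-generation code.

import numpy as np

A, B, C = 0.5, 10.0, -5.0

def y_H(x):
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)      # Eq. (11)

def y_L(x):
    return A * y_H(x) + B * (x - 0.5) + C             # Eq. (10)

x_L = np.linspace(0.0, 1.0, 11)                       # 11 low-fidelity points
x_H = np.array([0.0, 0.4, 0.6, 1.0])                  # 4 high-fidelity points
y_L_train, y_H_train = y_L(x_L), y_H(x_H)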

Table 1
Mean relative L2 errors (×10^−3) for NNs of different sizes.

Depth \ Width    4      8      16     32
1                3.1    3.0    4.6    2.9
2                3.4    3.0    3.0    3.1
3                3.1    3.1    3.1    3.0
4                3.0    3.0    3.0    3.0

We first try to predict the true function using the high-fidelity data only. For this case, we only need to keep NN_H2 (Fig. 1); in addition, the input of NN_H2 becomes x because no low-fidelity data are available. Here 4 hidden layers with 20 neurons per layer are adopted in NN_H2 and no regularization is used; the learning rate is set to 0.001. As we can see in Fig. 2(b), the present model provides inaccurate predictions due to the lack of sufficient high-fidelity data. Furthermore, we also plot the predictive posterior mean of Kriging [7], which is similar to the result from NN_H2.
Keeping the high-fidelity data fixed, we try to improve the accuracy of the prediction by adding low-fidelity data (Fig. 2(a)). In this case, the last DNN for the PDE is discarded. Here 2 hidden layers with 20 neurons per layer are used in NN_L, while 2 hidden layers with 10 neurons per layer are employed for NN_H2, and no hidden layer is used in NN_H1 (the size of NN_H1 is kept identical in all of the following cases). The regularization rate is set to λ = 10^−2 with a learning rate of 0.001. As shown in Fig. 2(c), the present model provides accurate predictions for the high-fidelity profile. In addition, the prediction using Co-Kriging [7] is displayed in Fig. 2(c). We see that the learned profiles from these two methods are similar, while the result from the present model is slightly better than Co-Kriging, as can be seen in the inset of Fig. 2(c). Finally, the estimated correlation is illustrated in Fig. 2(d), which also agrees quite well with the exact result. Unlike Co-Kriging/GPR, no prior knowledge of the correlation between the low- and high-fidelity data is needed in the multi-fidelity DNN, indicating that the present model can learn the correlation dynamically from the given data.
To demonstrate the effectiveness of the decomposition into linear and nonlinear correlations between the low- and high-fidelity data, we further plot the predictions of the multi-fidelity DNN without NN_H1 in Fig. 2(c). As observed, the predicted high-fidelity profile shows little agreement with the exact solution. It is reasonable that a DNN with nonlinear activation functions can hardly approximate the linear correlation from such scarce high-fidelity data, but the prediction improves significantly (red dashed line in Fig. 2(c)) once NN_H1 is incorporated in the multi-fidelity DNN.
The size of the neural network (e.g., depth and width) has a strong effect on the predictive accuracy [23], which is also investigated here. Since we have sufficient low-fidelity data, it is easy to find an appropriate size for NN_L to approximate the low-fidelity function. Therefore, particular focus is put on the size of NN_H2, because the few high-fidelity data may yield overfitting. Note that since the correlation between the low- and high-fidelity data is relatively simple, there is no need to give NN_H2 a large size. Hence, we limit the ranges of the depth (l) and width (w) to l ∈ [1, 4] and w ∈ [2, 32], respectively. Considering that a random initialization is utilized, we perform ten runs for each combination of depth and width. The mean and standard deviation of the relative L2 error, defined as

E = (1/N) Σ_{n=1}^{N} E_n,   E_n = sqrt( Σ_j (y_j* − y_j)² / Σ_j y_j² ),   σ = sqrt( (1/N) Σ_{n=1}^{N} (E_n − E)² ),   (12)

are used to quantify the effect of the size of NN_H2. In Eq. (12), E is the mean relative L2 error, n is the index of each run, N is the total number of runs (N = 10), j is the index of each sample data point, E_n is the relative L2 error of the n-th run, and the definitions of y* and y are the same as those in Sec. 2. As shown in Table 1, the errors for NN_H2 with different depths and widths are almost the same. The standard deviations of the relative L2 errors are not presented because they are less than 10^−5 for each case. All these results demonstrate the robustness of the multi-fidelity DNNs. To reduce the computational cost while retaining accuracy, a good choice for the size of NN_H2 may be l ∈ [1, 2] and w ∈ [4, 20] in low dimensions.
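For reference, the statistics of Eq. (12) can be computed as follows (a NumPy sketch):

import numpy as np

def relative_l2(y_pred, y_true):
    # Relative L2 error over all sample points j, as in Eq. (12).
    return np.sqrt(np.sum((y_pred - y_true) ** 2) / np.sum(y_true ** 2))

def run_statistics(errors):
    # Mean E and standard deviation sigma over the N = len(errors) runs.
    e = np.asarray(errors)
    return e.mean(), np.sqrt(np.mean((e - e.mean()) ** 2))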

3.1.2. Discontinuous function with linear correlation


As mentioned in [8], the approximation of a discontinuous function using GPR is challenging due to the continuous kernel employed. We therefore proceed to test the capability of the present model for approximating discontinuous functions. The low- and high-fidelity data are generated by the following "Forrester" functions with a jump [8]:

y_L(x) = { 0.5(6x − 2)² sin(12x − 4) + 10(x − 0.5) − 5,       0 ≤ x ≤ 0.5,
         { 3 + 0.5(6x − 2)² sin(12x − 4) + 10(x − 0.5) − 5,   0.5 < x ≤ 1,   (13)

and

y_H(x) = { 2 y_L(x) − 20x + 20,       0 ≤ x ≤ 0.5,
         { 4 + 2 y_L(x) − 20x + 20,   0.5 < x ≤ 1.   (14)
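A NumPy sketch of Eqs. (13)-(14); the offsets 3 and 4 produce the jumps at x = 0.5:

import numpy as np

def y_L(x):
    # Low-fidelity "Forrester" function with a jump at x = 0.5, Eq. (13).
    base = 0.5 * (6 * x - 2) ** 2 * np.sin(12 * x - 4) + 10 * (x - 0.5) - 5
    return np.where(x <= 0.5, base, base + 3)

def y_H(x):
    # High-fidelity counterpart, linear in y_L, Eq. (14).
    base = 2 * y_L(x) - 20 * x + 20
    return np.where(x <= 0.5, base, base + 4)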

Fig. 3. Approximation of a discontinuous function from multi-fidelity data with linear correlation. (a) Training data at the low- (38 data points) and high-fidelity (5 data points) levels. (b) Predictions from a DNN using high-fidelity data only (red dashed line); also included is the exact curve (black solid line). (c) Predictions from the multi-fidelity DNN at the high-fidelity level (red dashed line). (d) The red dashed line in the (x, y_L, y_H) plane represents Eq. (5) (on top of the exact black solid line) and the red dashed line in the (y_L, y_H) plane represents the correlation discovered between the high- and low-fidelity data; the blue line is the exact correlation.

As illustrated in Fig. 3(a), 38 and 5 sampling points are employed as the training data at the low- and high-fidelity levels, respectively. The learning rate is again set to 0.001 for all test cases here. Similarly, we employ NN_H2 (l × w = 4 × 20) to predict the high-fidelity values on the basis of the given high-fidelity data only, but the corresponding prediction is not good (Fig. 3(b)). However, using the multi-fidelity data, the present model provides quite accurate predictions for the high-fidelity profile (Fig. 3(c)). Remarkably, the multi-fidelity DNN captures the discontinuity at x = 0.5 at the high-fidelity level quite well even though no data are available in the range 0.4 < x < 0.6. This is reasonable because the low- and high-fidelity data share the same trend for 0.4 < x < 0.6, yielding correct predictions of the high-fidelity values in this zone. Furthermore, the learned correlation is displayed in Fig. 3(d), showing only slight differences from the exact correlation.

3.1.3. Continuous function with nonlinear correlation


To test the present model for capturing complicated nonlinear correlations between the low- and high-fidelity data, we
further consider the following case [10]:

y_L(x) = sin(8πx),   x ∈ [0, 1],   (15)

y_H(x) = (x − 2) y_L²(x).   (16)
Here, we employ 51 and 14 uniformly distributed data points as the training data for the low- and high-fidelity levels, respectively (Fig. 4(a)). The learning rate for all test cases is still 0.001. As before, NN_H2 (l × w = 4 × 20) cannot provide accurate predictions for the high-fidelity values using only the few high-fidelity data points, as displayed in Fig. 4(b). We then test the performance of the multi-fidelity DNN based on the multi-fidelity training data. Four hidden layers with 20 neurons per layer are used in NN_L, and 2 hidden layers with 10 neurons per layer are utilized for NN_H2. Again, the predicted profile from the present model agrees well with the exact profile at the high-fidelity level, as shown in Fig. 4(c). It is interesting to find that the multi-fidelity DNN can still provide accurate predictions for the high-fidelity profile even where the trend of the low-fidelity data is opposite to that of the high-fidelity data, e.g., 0 < x < 0.2, a case of adversarial-type data. In addition, the learned correlation between the low- and high-fidelity data agrees well with the exact one, as illustrated in Fig. 4(d), indicating that the multi-fidelity DNN is capable of discovering the non-trivial underlying correlation on the basis of the training data.

Fig. 4. Approximation of a continuous function from multi-fidelity data with nonlinear correlation. (a) Training data at the low- (51 data points) and high-fidelity (14 data points) levels. Black solid line: high-fidelity values; black dashed line: low-fidelity values; red crosses: high-fidelity training data; blue circles: low-fidelity training data. (b) Predictions from the high-fidelity DNN (red dashed line); black solid line: exact values. (c) Predictions from the multi-fidelity DNN at the high-fidelity level (red dashed line). (d) The red dashed line in (x, y_L, y_H) represents Eq. (5) (on top of the exact black solid line) and the red dashed line in (y_L, y_H) represents the correlation discovered between the high- and low-fidelity data; the blue line is the exact correlation.

3.1.4. Phase-shifted oscillations


For more complicated correlations between the low- and high-fidelity data, we can easily extend the multi-fidelity DNN based on the "embedding theory" to enhance its capability of learning complex correlations [28] (for more details on the embedding theory, refer to Appendix A). Here, we consider the following low-/high-fidelity functions with phase errors [28]:

y_H(x) = x² + sin²(8πx + π/10),   (17)

y_L(x) = sin(8πx).   (18)

We can further write y_H in terms of y_L as

y_H = x² + ( y_L cos(π/10) + y_L^(1) sin(π/10)/(8π) )²,   (19)

where y_L^(1) denotes the first derivative of y_L. The relation between the low- and high-fidelity data is displayed in Fig. 5(a),
which is rather complicated. The performance of the multi-fidelity DNN for this case is tested next. To approximate the high-fidelity function, we select 51 and 16 uniformly distributed sample points as the training data for the low- and high-fidelity values, respectively (Fig. 5(b)). The selected learning rate for all test cases is 0.001. Here, we test two types of inputs for NN_H2: [x, y_L(x)] (Method I) and [x, y_L(x), y_L(x − τ)] (Method II), where τ is the delay. Four hidden layers with 20 neurons per layer are used in NN_L, and 2 hidden layers with 10 neurons per layer are utilized for NN_H2. As shown in Fig. 5, it is interesting to find that Method II provides accurate predictions for the high-fidelity values (Fig. 5(d)), while Method I fails (Fig. 5(c)). As mentioned in [28], the term y_L(x − τ) can be viewed as an implicit approximation of y_L^(1), which enables Method II to capture the correlation in Eq. (19) based only on a small number of high-fidelity data points. However, given that no information on y_L^(1) is available in Method I, the present datasets are insufficient to obtain the correct correlation.
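For illustration, the two input constructions can be assembled as follows (NumPy sketch); here τ is fixed to the learned optimum reported in Fig. 5, whereas in the model itself τ is a trainable parameter:

import numpy as np

def y_L(x):
    return np.sin(8 * np.pi * x)         # Eq. (18)

def inputs_method_I(x):
    # Method I: [x, y_L(x)]
    return np.stack([x, y_L(x)], axis=1)

def inputs_method_II(x, tau=4.49e-2):
    # Method II: [x, y_L(x), y_L(x - tau)]; the delayed value acts as an
    # implicit surrogate for the derivative y_L^(1) appearing in Eq. (19).
    return np.stack([x, y_L(x), y_L(x - tau)], axis=1)

x_H_train = np.linspace(0.0, 1.0, 16)    # 16 uniformly spaced high-fidelity points
features = inputs_method_II(x_H_train)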

3.1.5. 20-dimensional function approximation


In principle, the new multi-fidelity DNN can approximate any high-dimensional function, so here we take a modest size so that it is not computationally expensive to train the DNN. Specifically, we generate the low- and high-fidelity data for a 20-dimensional function from the following equations [29]:

Fig. 5. Approximation of a continuous function from multi-fidelity data with phase-shifted oscillations and a highly nonlinear correlation. (a) Correlation among x, y_L, and y_H; the blue line represents the projection onto the (y_L, y_H) plane. (b) Training data for y_L and y_H. Black solid line: exact high-fidelity values; black dashed line: exact low-fidelity values; red crosses: high-fidelity training data; blue circles: low-fidelity training data. (c) Predictions from Method I (without time delay) (red dashed line). (d) Predictions from Method II (with time delay) (red dashed line). The learned optimal value for τ is 4.49 × 10^−2.

Fig. 6. Approximations of the 20-dimensional function (learning rate: 0.001). (a) Single-fidelity predictions from high-fidelity data; NN_H2 → 4 × 160, with 5000 randomly selected high-fidelity data points and 10000 test points at random locations. (b) Multi-fidelity DNN predictions; NN_L → 4 × 128, NN_H2 → 2 × 64, with 30000 and 5000 randomly selected low-/high-fidelity data points and 10000 test points at random locations.

y_H(x) = (x_1 − 1)² + Σ_{i=2}^{20} (2x_i² − x_{i−1})²,   x_i ∈ [−3, 3],   i = 1, 2, ..., 20,   (20)

y_L(x) = 0.8 y_H(x) − Σ_{i=1}^{19} 0.4 x_i x_{i+1} − 50.   (21)

As shown in Fig. 6(a), using only the available high-fidelity data does not lead to an accurate function approximation but
using the multi-fidelity DNN approach gives excellent results as shown in Fig. 6(b).
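The benchmark data can be generated as in the following NumPy sketch; uniform random sampling of the input domain is our assumption, as the text only states that the points are randomly selected:

import numpy as np

def y_H(x):
    # Eq. (20); x has shape (n_samples, 20), entries in [-3, 3].
    return (x[:, 0] - 1) ** 2 + np.sum((2 * x[:, 1:] ** 2 - x[:, :-1]) ** 2, axis=1)

def y_L(x):
    # Eq. (21): linear in y_H plus pairwise cross terms.
    return 0.8 * y_H(x) - np.sum(0.4 * x[:, :-1] * x[:, 1:], axis=1) - 50

rng = np.random.default_rng(0)
x_lo = rng.uniform(-3, 3, (30000, 20))   # 30000 low-fidelity samples
x_hi = rng.uniform(-3, 3, (5000, 20))    # 5000 high-fidelity samples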
In summary, in this section we have demonstrated, using different data sets and correlations, that multi-fidelity DNNs can adaptively learn the underlying correlation between the low- and high-fidelity data from the given datasets without any prior assumption on the correlation. In addition, they can be applied to high-dimensional cases, hence outperforming GPR [10]. Finally, the present framework can be easily extended, based on the embedding theory, to non-functional correlations, which enables multi-fidelity DNNs to learn more complicated nonlinear correlations induced by phase errors in the low-fidelity data (adversarial data).

3.2. Inverse PDE problems with nonlinearities

In this section, we apply the multi-fidelity PINNs (MPINNs) to two inverse PDE problems with nonlinearities, specifically, unsaturated flows and reactive transport in porous media, which have extensive applications in various fields, such as contaminant transport in soil, CO2 sequestration, and oil recovery. The hydraulic conductivity is first estimated based on scarce high-fidelity measurements of the pressure head; subsequently, the reactive models are learned given a small set of high-fidelity observations of the solute concentration.

3.2.1. Learning the hydraulic conductivity for nonlinear unsaturated flows


Unsaturated flows play an important role in the ground-subsurface water interaction zone [30,31]. Here we consider a steady unsaturated flow in a one-dimensional (1D) column with variable water content, which can be described by the following equation:

∂_x ( K(h) ∂_x h ) = 0.   (22)
We consider two types of boundary conditions: (1) constant flux at the inlet and constant pressure head at the outlet, q = −K ∂_x h = q_0 at x = 0 and h = h_1 at x = L_x (Case I); and (2) constant pressure head at both the inlet and outlet, h = h_0 at x = 0 and h = h_1 at x = L_x (Case II). Here L_x = 200 cm is the length of the column, h is the pressure head, h_0 and h_1 are, respectively, the pressure heads at the inlet and outlet, q represents the flux, and q_0 is the flux at the inlet, which is constant. In addition, K(h) denotes the pressure-dependent hydraulic conductivity, which is expressed as

K(h) = K_s S_e^{1/2} [ 1 − (1 − S_e^{1/m})^m ]²,   (23)

where K_s is the saturated hydraulic conductivity, and S_e is the effective saturation, which is a function of h. Several models have been developed to characterize S_e, but among them the van Genuchten model is the most widely used [32], which reads as follows:

S_e = 1 / (1 + |α_0 h|^n)^m,   m = 1 − 1/n.   (24)
In Eq. (24), α_0 is related to the inverse of the air-entry suction, and m is a measure of the pore-size distribution. To obtain the velocity field for later applications, we should first obtain the distribution of K(h). Unfortunately, both parameters depend on the geometry of the porous medium and are difficult to measure directly. We note that the pressure head can be measured more easily than α_0 and m. Therefore, we assume that partial measurements of h are available without direct measurements of α_0 and m. The objective is to estimate α_0 and m based on the observations of h; we can then compute the distribution of K(h) according to Eqs. (23) and (24).
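For concreteness, Eqs. (23)-(24) can be evaluated as in the following NumPy sketch; the default K_s is the loam value quoted below:

import numpy as np

def van_genuchten_K(h, alpha0, m, K_s=1.04):
    # Effective saturation S_e of Eq. (24), with n = 1 / (1 - m).
    n = 1.0 / (1.0 - m)
    S_e = (1.0 + np.abs(alpha0 * h) ** n) ** (-m)
    # Hydraulic conductivity K(h) of Eq. (23); K_s in cm/hr (loam value).
    return K_s * np.sqrt(S_e) * (1.0 - (1.0 - S_e ** (1.0 / m)) ** m) ** 2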
Loam is selected as a representative case here, for which the empirical ranges of α_0 and m are α_0 (cm^−1) ∈ [0.015, 0.057] and m ∈ [0.31, 0.40] [33]; in addition, K_s = 1.04 cm/hr. To obtain the training data for the neural networks, two types of numerical simulations are conducted to generate the low- and high-fidelity data using bvp4c in Matlab (uniform lattice with δx = 1/15 cm). For the high-fidelity data, the exact values of α_0 and m are taken to be 0.036 cm^−1 and 0.36, and the high-fidelity simulations are conducted using these exact values. Different initial guesses for α_0 and m are employed in the low-fidelity simulations; specifically, ten uniformly distributed pairs (α_0, m) in the range (0.015, 0.31)–(0.057, 0.40) are adopted. For all cases, 31 uniformly distributed sampling points at the low-fidelity level serve as the training data, 2 sampling points are employed as the high-fidelity training data, and 400 randomly sampled points are used to measure MSE_fe. In addition, a smaller learning rate, 10^−4, is employed for all test cases in this section.
We first consider the flow with a constant-flux inlet. The flux at the inlet and the pressure head at the outlet are set to q_0 = 0.01 cm/y and h_1 = −20 cm, respectively, and Eq. (22) is encoded in the last neural network of the MPINN. We first employ the numerical results for α_0 = 0.055 and m = 0.4 as the low-fidelity data. As shown in Fig. 7(d), the resulting prediction for the hydraulic conductivity differs from the exact solution. According to Darcy's law, we can rewrite Eq. (22) as

q(x) = −K ∂_x h,   ∂_x q(x) = 0.   (25)

Considering that q = q_0 at the inlet is constant, we obtain

q(x) = −K ∂_x h = q_0,   (26)

which is simply the statement of mass conservation at each cross section. We then employ Eq. (26) instead of Eq. (22) in the MPINNs, and the results improve greatly (Fig. 7(d)).
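A minimal sketch of the residual for the integral formulation, assuming PyTorch autograd; nn_h and K_of_h are placeholders for the network's pressure-head output and the van Genuchten conductivity of Eqs. (23)-(24) with trainable α_0 and m:

import torch

def flux_residual(nn_h, K_of_h, x, q0):
    # Residual of Eq. (26): the flux q(x) = -K(h) dh/dx should equal the
    # (known) inlet flux q0 at every point; its square contributes to MSE_fe.
    x = x.clone().requires_grad_(True)
    h = nn_h(x)
    dh_dx = torch.autograd.grad(h, x, grad_outputs=torch.ones_like(h),
                                create_graph=True)[0]
    return -K_of_h(h) * dh_dx - q0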

Fig. 7. Predictions for unsaturated flow in porous media using the differential (Eq. (22)) and integral formulations (Eq. (26)) with constant flux at the inlet
and constant pressure head at the outlet. (a) Training data for pressure head. (b) Low- and high-fidelity hydraulic conductivity. (c) Predicted pressure head
using MPINNs training with multi-fidelity data. Method I: Differential formulation, Method II: Integral formulation. (d) Predicted hydraulic conductivity
using MPINNs training with multi-fidelity data. Method I: Differential formulation, Method II: Integral formulation.

We proceed to study this case in more detail. We first perform single-fidelity (SF) modeling based on the high-fidelity data, using two hidden layers with 20 neurons per layer in NN_H2, in which the hyperbolic tangent is employed as the activation function. The learned pressure head and hydraulic conductivity are shown in Figs. 8(a)-8(b); we observe that both the learned h and K(h) disagree with the exact results. We then switch to multi-fidelity modeling. Two hidden layers with 10 neurons per layer are used in NN_L, and two hidden layers with 10 neurons per layer are utilized for NN_H2. The predicted pressure head as well as the hydraulic conductivity (averaged over ten runs with different initial guesses) agree quite well with the exact values (Figs. 8(c)-8(d)). For Case II, we set the pressure heads at the inlet and outlet to h_0 = −3 cm and h_1 = −10 cm. We also assume that the flux at the inlet is known, so Eq. (26) can again be employed instead of Eq. (22) in the MPINNs. The training data are illustrated in Fig. 9(a). The sizes of the NNs are kept the same as in Case I. We observe that the results for the present case (Figs. 9(c)-9(f)) are quite similar to those for Case I.
Finally, the mean values of α_0 and m for different initial guesses are shown in Table 2, which indicates that the MPINNs significantly improve the prediction accuracy compared to estimations based on the high-fidelity data only (SF in Table 2).

3.2.2. Estimation of reaction models for reactive transport


We further consider a single irreversible chemical reaction in a 1D soil column with a length of 5 m, similar to the case in [5], which can be expressed as

a_r A → B,   (27)

where A and B are different solutes. The above reactive transport can be described by the following advection-dispersion-reaction equation:

∂_t(ψ C_i) + q ∂_x C_i = ψ D ∂_x² C_i − ψ v_i k_{f,r} C_A^{a_r},   i = A, B,   (28)

where C_i (mol/L) is the concentration of each solute, q is the Darcy velocity, ψ is the porosity, D is the dispersion coefficient, k_{f,r} denotes the chemical reaction rate, a_r is the order of the chemical reaction (both k_{f,r} and a_r are difficult to measure directly), and v_i is the stoichiometric coefficient, with v_A = a_r and v_B = −1.

Fig. 8. Predictions for unsaturated flow in porous media using the integral formulation (Eq. (26)) with constant flux at the inlet and constant pressure
head at the outlet. (a) Predicted pressure head using PINNs training with high-fidelity data only. (b) Predicted hydraulic conductivity using PINNs training
with high-fidelity data only. (c) Predicted pressure head using MPINNs with multi-fidelity data. (d) Predicted hydraulic conductivity using MPINNs with
multi-fidelity data.

Table 2
PINN and MPINN predictions for hydraulic conductivity.

               α_0 (cm^−1)   σ(α_0)         m       σ(m)
SF (Case I)    0.0438        -              0.359   -
MF (Case I)    0.0344        0.0027         0.347   0.0178
SF (Case II)   0.0440        -              0.377   -
MF (Case II)   0.0337        7.91 × 10^−4   0.349   0.0037
Exact          0.036         -              0.36    -

Here, we assume that the following parameters are known: ψ = 0.4, q = 0.5 m/y, and D = 10^−8 m²/s. The initial and boundary conditions imposed on the solutes are

C_A(x, 0) = C_B(x, 0) = 0,   (29)

C_A(0, t) = 1,   C_B(0, t) = 0,   (30)

∂_x C_i(x, t)|_{x = l_x} = 0.   (31)
The objective here is to learn the effective chemical reaction rate as well as the reaction order based on partial observations
of the concentration field C A (x, t ).
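As a sketch of the PDE term encoded in the last network, the residual of Eq. (28) for species A can be formed with automatic differentiation (PyTorch assumed); k_f and a_r are the trainable unknowns, and ψ, q, D take the values given above:

import torch

def adr_residual(nn_C, xt, k_f, a_r, psi=0.4, q=0.5, D=1e-8):
    # Residual of Eq. (28) for species A, using the effective rate k_f = v_A k_f,r.
    xt = xt.clone().requires_grad_(True)       # columns: (x, t)
    C = nn_C(xt)
    grads = torch.autograd.grad(C, xt, grad_outputs=torch.ones_like(C),
                                create_graph=True)[0]
    C_x, C_t = grads[:, 0:1], grads[:, 1:2]
    C_xx = torch.autograd.grad(C_x, xt, grad_outputs=torch.ones_like(C_x),
                               create_graph=True)[0][:, 0:1]
    # psi * dC/dt + q * dC/dx - psi * D * d2C/dx2 + psi * k_f * C^a_r = 0
    return psi * C_t + q * C_x - psi * D * C_xx + psi * k_f * C ** a_r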
We perform lattice Boltzmann simulations [34,35] to obtain the training data, since we have no experimental data. Considering that v_A is a constant, we define an effective reaction rate k_f = v_A k_{f,r} for simplicity. The exact effective reaction rate and reaction order are assumed to be k_f = 1.577/y and a_r = 2, respectively. Numerical simulations with the exact k_f and a_r are then conducted to obtain the high-fidelity data. In the simulations, a uniform lattice is employed, i.e., l_x = 400 δx, where δx = 0.0125 m is the space step, and δt = 6.67 × 10^−4 y is the time step. We assume that the sensors for concentration are located at x = {0.625, 1.25, 2.5, 3.75} m, and that data are collected from the sensors once every half year. In particular, we employ two different datasets (Fig. 10): (1) t = 0.5 and 1 years (Case I), and (2) t = 0.25 and 0.75 years (Case II). Schematics of the training data points for the two cases are shown in Figs. 10(a) and 10(b).

Fig. 9. Predictions for unsaturated flow in porous media using the integral formulation (Eq. (26)) with constant pressure head at the inlet and outlet. (a)
Training data for pressure head. Low-fidelity data is computed with α0 = 0.015 and m = 0.31. (b) Low- and high-fidelity hydraulic conductivity. Low-fidelity
hydraulic conductivity is computed with α0 = 0.015 and m = 0.31. (c) Predicted pressure head using PINNs training with high-fidelity data only. (d)
Predicted hydraulic conductivity using PINNs training with high-fidelity data only. (e) Predicted pressure head using MPINNs with multi-fidelity data. (f)
Predicted hydraulic conductivity using MPINNs with multi-fidelity data.

Fig. 10. Schematic of the space-time domain and the locations of the high-fidelity data for modeling reactive transport. (a) Case I: Data are collected at
t = 0.5 and 1 years. (b) Case II: Data are collected at t = 0.25 and 0.75 years.

Fig. 11. Predicted concentration field. (a) Case I: relative errors (absolute value) using a PINN trained on high-fidelity data only; NN_H2 → 4 × 20. (b) Case I: mean relative errors (absolute value) using an MPINN trained on multi-fidelity data; initial guesses: ten uniformly distributed pairs in [0.75 k_f, 0.75 a_r] − [1.25 k_f, 1.25 a_r]. The concentration fields plotted are the mean values over ten runs with different initial guesses; NN_L → 2 × 10, NN_H2 → 2 × 10. (c) Case II: relative errors (absolute value) using a PINN trained on high-fidelity data only; NN_H2 → 4 × 20. (d) Case II: mean relative errors (absolute value) using an MPINN trained on multi-fidelity data; NN_L → 2 × 10, NN_H2 → 2 × 10.

Table 3
PINN and MPINN predictions for reactive transport.

               k_f (/y)   σ(k_f)         a_r     σ(a_r)
SF (Case I)    0.441      -              0.558   -
MF (Case I)    1.414      7.45 × 10^−3   1.790   9.44 × 10^−3
SF (Case II)   1.224      -              1.516   -
MF (Case II)   1.557      2.14 × 10^−2   1.960   2.57 × 10^−2
Exact          1.577      -              2       -

Next, we describe how we obtain the low-fidelity data. In realistic applications, the pure chemical reaction rate (without porous media) between different solutes, e.g., A and B, is known and can serve as the initial guess for k_f. Here we assume that the initial guesses for the chemical reaction rate and reaction order vary from 0.75 to 1.25 times the exact values k_f and a_r. To study the effect of the initial guess (k_{f,0}, a_{r0}) on the predictions, we conduct low-fidelity numerical simulations for ten uniformly distributed pairs in [0.75 k_f, 0.75 a_r] − [1.25 k_f, 1.25 a_r], using the same grid size and time step as the high-fidelity simulations; here k_{f,0} and a_{r0} denote the initial guesses for k_f and a_r. The learning rate employed in this section is again 10^−4. In addition, 30,000 randomly sampled points are employed to measure MSE_fe.
The predictions of PINNs (with the hyperbolic tangent activation function) trained on high-fidelity data only are shown in Figs. 11(a) and 11(c) for the two cases we consider, and the corresponding results using MPINNs are shown in Figs. 11(b) and 11(d). The estimated means and standard deviations of k_f and a_r are displayed in Table 3 and are much better than the results from single-fidelity modeling. We also note that the standard deviations are rather small, which demonstrates the robustness of the MPINNs.

4. Conclusion

In this work we presented a new composite deep neural network that learns from multi-fidelity data, i.e., a small set of high-fidelity data and a larger set of inexpensive low-fidelity data. This scenario is prevalent in the modeling of many physical and biological systems, and we expect that the new DNN will provide solutions to many current bottlenecks where large sets of high-fidelity data are simply not available but either low-fidelity data from inexpensive sensors or other modalities, or even simulated data, can be readily obtained. Moreover, we extended the concept of physics-informed neural networks (PINNs), which are trained with single-fidelity data, to the multi-fidelity case, obtaining MPINNs. Specifically, MPINNs are composed of four fully-connected neural networks: the first approximates the low-fidelity data; the second and third construct the correlations between the low- and high-fidelity data; and the last encodes the PDEs that describe the corresponding physical problem. The two sub-networks included in the high-fidelity network approximate the linear and nonlinear parts of the correlation, respectively. Training the two sub-networks enables the MPINNs to learn the correlation from the training data without any prior assumption on the relation between the low- and high-fidelity data.

MPINNs have the following attractive features: (1) owing to the expressive power of NNs for function approximation, multi-fidelity NNs are able to approximate both continuous and discontinuous functions in high dimensions; (2) because NNs can handle almost any kind of nonlinearity, MPINNs are effective for identifying unknown parameters or functions in inverse problems described by nonlinear PDEs.
We first tested the new multi-fidelity DNN in approximating continuous and discontinuous functions with linear and nonlinear correlations. Our results demonstrated that the present model can adaptively learn the correlations between the low- and high-fidelity data based on training data of variable fidelity. In addition, this model can easily be extended, based on the embedding theory, to learn more complicated nonlinear and non-functional correlations. We then tested MPINNs on inverse PDE problems, namely, estimating the hydraulic conductivity for unsaturated flows as well as the reaction models for reactive transport in porous media. We found that the proposed MPINN can identify unknown parameters or even functions with high accuracy using very few high-fidelity data, which is promising for reducing the high experimental cost of collecting high-fidelity data. Finally, we point out that MPINNs can also be employed for high-dimensional problems as well as problems with multiple fidelities, i.e., more than two fidelity levels.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have
appeared to influence the work reported in this paper.

Acknowledgement

This work was supported by the PHILMS grant DE-SC0019453, the DOE-BER grant DE-SC0019434, the AFOSR grant
FA9550-17-1-0013, and the DARPA-AIRA grant HR00111990025. In addition, we would like to thank Dr. Guofei Pang, Dr.
Zhen Li, Dr. Zhiping Mao, and Ms Xiaoli Chen for their helpful discussions.

Appendix A. Data-driven manifold embeddings

To learn more complicated nonlinear correlations between the low- and high-fidelity data, we can further link the multi-fidelity DNNs with the embedding theory [28]. According to the weak Whitney embedding theorem [36], any continuous function from an n-dimensional manifold to an m-dimensional manifold may be approximated by a smooth embedding with m > 2n. Building on this theorem, Takens' theorem [37] further points out that the m embedding dimensions can be composed of m different observations of the system state variables or of m time delays of a single scalar observable.
We now introduce the application of these two theorems to multi-fidelity modeling. We assume that both y_L and y_H are smooth functions. Supposing that y_L(x), y_L(x − τ), ..., y_L(x − (m − 1)τ) (τ is the time delay) and a small number of (x, y_H) pairs are available, we can express y_H in the following form:

y_H(x) = F( x, y_L(x), y_L(x − iτ) ),   i = 1, ..., m − 1.   (A.1)

By using this formulation, we can construct more complicated correlations than Eq. (2). To link the multi-fidelity DNN with the embedding theory, we extend the inputs of NN_H,i to higher dimensions, i.e., [x, y_L(x)] → [x, y_L(x), y_L(x − τ), y_L(x − 2τ), ..., y_L(x − (m − 1)τ)], which enables the multi-fidelity DNN to discover more complicated underlying correlations between the low- and high-fidelity data.
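A sketch of such an extended input layer with a trainable delay, assuming PyTorch; evaluating the trained low-fidelity surrogate NN_L at shifted locations is what makes τ differentiable, consistent with learning τ by optimizing the NNs as noted below:

import torch
import torch.nn as nn

class DelayEmbedding(nn.Module):
    # Builds [x, y_L(x), y_L(x - tau), ..., y_L(x - (m-1) tau)] with trainable tau.
    def __init__(self, y_L_net, m=2, tau_init=0.05):
        super().__init__()
        self.y_L_net = y_L_net   # trained low-fidelity surrogate NN_L
        self.m = m
        self.tau = nn.Parameter(torch.tensor(tau_init))

    def forward(self, x):
        feats = [x] + [self.y_L_net(x - i * self.tau) for i in range(self.m)]
        return torch.cat(feats, dim=1)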
Note that the selection of the optimal value of the time delay τ is important in embedding theory [38–40], and numerous studies have been carried out on this topic [38]. However, most of the existing methods for determining the optimal value of τ appear to be problem-dependent [38]. Recently, Dhir et al. proposed a Bayesian delay embedding method in which τ is robustly learned from the training data by employing a variational autoencoder [40]. In the present study, the value of τ is likewise learned by optimizing the NNs, rather than being set to a constant as in the original work presented in Ref. [28].

References

[1] N.M. Alexandrov, R.M. Lewis, C.R. Gumbert, L.L. Green, P.A. Newman, Approximation and model management in aerodynamic optimization with
variable-fidelity models, J. Aircr. 38 (6) (2001) 1093–1101.
[2] D. Böhnke, B. Nagel, V. Gollnick, An approach to multi-fidelity in conceptual aircraft design in distributed design environments, in: 2011 Aerospace
Conference, IEEE, 2011, pp. 1–10.
[3] L. Zheng, T.L. Hedrick, R. Mittal, A multi-fidelity modelling approach for evaluation and optimization of wing stroke aerodynamics in flapping flight,
J. Fluid Mech. 721 (2013) 118–154.
[4] N.V. Nguyen, S.M. Choi, W.S. Kim, J.W. Lee, S. Kim, D. Neufeld, Y.H. Byun, Multidisciplinary unmanned combat air vehicle system design using multi-
fidelity model, Aerosp. Sci. Technol. 26 (1) (2013) 200–210.
[5] H. Chang, D. Zhang, Identification of physical processes via combined data-driven and data-assimilation methods, J. Comput. Phys. 393 (2019) 337–350.
[6] M.C. Kennedy, A. O’Hagan, Predicting the output from a complex computer code when fast approximations are available, Biometrika 87 (1) (2000)
1–13.
[7] A.I. Forrester, A. Sóbester, A.J. Keane, Multi-fidelity optimization via surrogate modelling, Proc., Math. Phys. Eng. Sci. 463 (2088) (2007) 3251–3269.
[8] M. Raissi, G.E. Karniadakis, Deep multi-fidelity Gaussian processes, arXiv preprint arXiv:1604.07484, 2016.

[9] M. Raissi, P. Perdikaris, G.E. Karniadakis, Inferring solutions of differential equations using noisy multi-fidelity data, J. Comput. Phys. 335 (2017)
736–746.
[10] P. Perdikaris, M. Raissi, A. Damianou, N. Lawrence, G.E. Karniadakis, Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling,
Proc., Math. Phys. Eng. Sci. 473 (2198) (2017) 20160751.
[11] L. Bonfiglio, P. Perdikaris, G. Vernengo, J.S. de Medeiros, G.E. Karniadakis, Improving swath seakeeping performance using multi-fidelity Gaussian
process and Bayesian optimization, J. Ship Res. 62 (4) (2018) 223–240.
[12] K.J. Chang, R.T. Haftka, G.L. Giles, I.J. Kao, Sensitivity-based scaling for approximating structural response, J. Aircr. 30 (2) (1993) 283–288.
[13] R. Vitali, R.T. Haftka, B.V. Sankar, Multi-fidelity design of stiffened composite panel with a crack, Struct. Multidiscip. Optim. 23 (5) (2002) 347–356.
[14] M. Eldred, Recent advances in non-intrusive polynomial chaos and stochastic collocation methods for uncertainty analysis and design, in: 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 17th AIAA/ASME/AHS Adaptive Structures Conference, 11th AIAA Non-Deterministic Approaches Conference, 2009, p. 2274.
[15] A.S. Padron, J.J. Alonso, M.S. Eldred, Multi-fidelity methods in aerodynamic robust optimization, in: 18th AIAA Non-Deterministic Approaches Confer-
ence, 2016, 0680.
[16] J. Laurenceau, P. Sagaut, Building efficient response surfaces of aerodynamic functions with Kriging and Cokriging, AIAA J. 46 (2) (2008) 498–507.
[17] E. Minisci, M. Vasile, Robust design of a reentry unmanned space vehicle by multi-fidelity evolution control, AIAA J. 51 (6) (2013) 1284–1295.
[18] P. Lancaster, K. Salkauskas, Surfaces generated by moving least squares methods, Math. Comput. 37 (155) (1981) 141–158.
[19] D. Levin, The approximation power of moving least-squares, Math. Comput. 67 (224) (1998) 1517–1531.
[20] M.G. Fernández-Godino, C. Park, N.H. Kim, R.T. Haftka, Review of multi-fidelity models, arXiv preprint arXiv:1609.07196, 2016.
[21] H. Babaee, P. Perdikaris, C. Chryssostomidis, G.E. Karniadakis, Multi-fidelity modelling of mixed convection based on experimental correlations and
numerical simulations, J. Fluid Mech. 809 (2016) 895–917.
[22] Q. Zheng, J. Zhang, W. Xu, L. Wu, L. Zeng, Adaptive multi-fidelity data assimilation for nonlinear subsurface flow problems, Water Resour. Res. 55
(2018) 203–217.
[23] M. Raissi, P. Perdikaris, G.E. Karniadakis, Physics-informed neural networks: a deep learning framework for solving forward and inverse problems
involving nonlinear partial differential equations, J. Comput. Phys. 378 (2019) 686–707.
[24] M. Raissi, A. Yazdani, G.E. Karniadakis, Hidden fluid mechanics: a Navier-Stokes informed deep learning framework for assimilating flow visualization
data, arXiv preprint arXiv:1808.04327, 2018.
[25] A.M. Tartakovsky, C.O. Marrero, D. Tartakovsky, D. Barajas-Solano, Learning parameters and constitutive relationships with physics informed deep neural
networks, arXiv preprint arXiv:1808.03398, 2018.
[26] D. Zhang, L. Lu, L. Guo, G.E. Karniadakis, Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic
problems, J. Comput. Phys. 397 (2019) 108850.
[27] Z.-Q.J. Xu, Y. Zhang, T. Luo, Y. Xiao, Z. Ma, Frequency principle: Fourier analysis sheds light on deep neural networks, arXiv preprint arXiv:1901.06523.
[28] S. Lee, F. Dietrich, G.E. Karniadakis, I.G. Kevrekidis, Linking Gaussian process regression with data-driven manifold embeddings for nonlinear data
fusion, arXiv preprint arXiv:1812.06467, 2018.
[29] S. Shan, G.G. Wang, Metamodeling for high dimensional simulation-based design problems, J. Mech. Des. 132 (5) (2010) 051009.
[30] S.L. Markstrom, R.G. Niswonger, R.S. Regan, D.E. Prudic, P.M. Barlow, Gsflow-Coupled Ground-Water and Surface-Water Flow Model Based on the
Integration of the Precipitation-Runoff Modeling System (PRMS) and the Modular Ground-Water Flow Model (MODFLOW-2005), US Geological Survey
Techniques and Methods, vol. 6, 2008, p. 240.
[31] M. Hayashi, D.O. Rosenberry, Effects of ground water exchange on the hydrology and ecology of surface water, Groundwater 40 (3) (2002) 309–316.
[32] M.T. Van Genuchten, A closed-form equation for predicting the hydraulic conductivity of unsaturated soils, Soil Sci. Soc. Am. J. 44 (5) (1980) 892–898.
[33] R.F. Carsel, R.S. Parrish, Developing joint probability distributions of soil water retention characteristics, Water Resour. Res. 24 (5) (1988) 755–769.
[34] X. Meng, Z. Guo, Localized lattice Boltzmann equation model for simulating miscible viscous displacement in porous media, Int. J. Heat Mass Transf. 100 (2016) 767–778.
[35] B. Shi, Z. Guo, Lattice Boltzmann model for nonlinear convection-diffusion equations, Phys. Rev. E 79 (1) (2009) 016701.
[36] H. Whitney, Differentiable manifolds, Ann. Math. (1936) 645–680.
[37] F. Takens, Detecting strange attractors in turbulence, in: Dynamical Systems and Turbulence, Warwick, 1980, Springer, 1981, pp. 366–381.
[38] R. Hegger, H. Kantz, T. Schreiber, Practical implementation of nonlinear time series methods: the TISEAN package, Chaos 9 (2) (1999) 413–435.
[39] H.D. Abarbanel, R. Brown, J.J. Sidorowich, L.S. Tsimring, The analysis of observed chaotic data in physical systems, Rev. Mod. Phys. 65 (4) (1993) 1331.
[40] N. Dhir, A.R. Kosiorek, I. Posner, Bayesian delay embeddings for dynamical systems, in: Conference on Neural Information Processing Systems, 2017.
