DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks

Shengze Cai, Zhicheng Wang, Lu Lu, Tamer A. Zaki, George Em Karniadakis

Journal of Computational Physics 436 (2021) 110296
Article history: Available online 22 March 2021

Keywords: Multiphysics; Multiscale modeling; Deep learning; Data assimilation; Operator approximation; DeepONet

Abstract

Electroconvection is a multiphysics problem involving coupling of the flow field with the electric field as well as the cation and anion concentration fields. Here, we use electroconvection as a benchmark problem to put forward a new data assimilation framework, the DeepM&Mnet, for simulating multiphysics and multiscale problems at speeds much faster than standard numerical methods, using pre-trained neural networks. We first pre-train DeepONets that can predict each field independently, given general inputs from the rest of the fields of the coupled system. DeepONets can approximate nonlinear operators and are composed of two sub-networks, a branch net for the input fields and a trunk net for the locations of the output field. The DeepONets, which are extremely fast, are used as building blocks in the DeepM&Mnet and form constraints for the multiphysics solution, along with some sparse available measurements of any of the fields. We demonstrate the new methodology, document the accuracy of each individual DeepONet, and subsequently present two different DeepM&Mnet architectures that infer accurately and efficiently 2D electroconvection fields for unseen electric potentials. The DeepM&Mnet framework is general and can be applied to building any complex multiphysics and multiscale model from very few measurements, using pre-trained DeepONets in a “plug-and-play” mode.
1. Introduction
Recently, deep learning techniques have been introduced in modeling diverse fluid mechanics problems [1–3]. A promising direction is focused on the surrogate modeling of complex fluid systems, which are typically described by partial differential equations (PDEs). For example, Raissi et al. [4] applied physics-informed neural networks (PINNs) to obtain accurate solutions of a wide class of PDEs. The governing equations are embedded in PINNs and form the constraints of the solutions. Similar approaches for scientific machine learning have been proposed in a number of recent works [5–7]. Following the framework of [4], a number of learning-based methods were proposed to simulate different types of fluid mechanics problems [8–11]. In addition to solving forward problems, neural networks are also used to identify the unknown coefficient
values of PDEs from data [12–16]. A typical example is learning the Reynolds number of the Navier-Stokes equations from measurements of the flow [17,18]. This is a data-driven inverse problem, which is not easy to address with conventional computational fluid dynamics (CFD) methods.
Overall, the recent applications of deep learning to physics modeling are based on the universal approximation theorem
stating that neural networks (NNs) can be used to approximate any continuous function [19]. However, there are other
approximation theorems stating that a neural network can approximate accurately any continuous nonlinear functional
[20] or operator (a mapping from a function to another function) [21]. Based on the universal approximation theorem for operators, Lu et al. [22] proposed a specific network architecture, the deep operator network (DeepONet), to learn both
explicit as well as implicit operators, e.g., in the form of PDEs. DeepONet, which is designed based on rigorous theory, is
composed of two sub-networks, the branch net for the input function and the trunk net for the coordinate to evaluate the
output function. In [22], the authors presented theoretical analysis and demonstrated various examples to show the high
accuracy and high efficiency of DeepONets. Inspired by the work of [22], here we first pose the related question: “Can
DeepONet learn the mapping between two coupled state variables for complex fluid systems?” If this is the case, we ask:
“Can the pre-trained DeepONets be combined in a simplified data assimilation framework, using a single NN, to significantly accelerate the modeling of multiphysics and multiscale flow problems?”
In this paper, our first goal is to demonstrate the effectiveness of DeepONets for multiphysics and multiscale problems in
fluid mechanics. To this end, we use electroconvection as a benchmark problem, which describes electrolyte flow driven by
an electric voltage and involves multiphysics, i.e., the mass, momentum and cation/anion transport as well as electrostatic
fields [23]. Electroconvection is observed in many electrochemical and microfluidic systems that usually have ion selective
interfaces. Since the development of the theoretical formulations of electroconvection by [24,25], which were later confirmed
by experiment in [26], a considerable number of numerical studies has been published [27–29,23]. More recently, several
3D interesting simulations have been presented in [30,31]. The main bottleneck of the simulation of electroconvection is
the computational cost to resolve the electric double layer (EDL), the thickness of which is characterized by a dimensionless
parameter, namely the Debye length ( ). In the aforementioned references, = 10−3 is generally used, which is small
enough to simulate the unsteady flow patterns that exist in real experiments. Using a smaller Debye length (e.g., = 10−4 )
is not necessary as the main flow quantities (e.g., mean flow rate) are weakly sensitive to when it is small [30]. However,
the flow of electroconvection will become steady when a larger value of Debye length (e.g., = 10−2 ) is employed. Here, we
aim to present a proof-of-concept for a new deep learning framework and we only consider the steady-state electronvection
(i.e., = 10−2 ) as a benchmark multiphysics problem in this paper.
To apply DeepONets to electroconvection, we train DeepONets to predict each field independently, given inputs from the rest of the fields. Upon supervised training with labeled data, the DeepONets can predict the target field very accurately and efficiently. There are some existing learning-based methods that can perform similar input-output mappings. In particular, the most common approach uses convolutional neural networks (CNNs), which learn the mapping between two variables when the input and output functions are treated as images [32–35]. For example, Kim and Lee [35] used a CNN to predict the wall-normal heat flux from the wall-shear stresses and pressure. However, this kind of approach is limited to problems where the input and output are represented by 2D maps on equispaced grids. In contrast, DeepONets do not need to include all the grid values inside the computational domain; the training data can be sparsely and randomly distributed, making the method agnostic to the geometry.
In addition to the application of DeepONets, another objective in this paper is to put forward a new framework, the
DeepM&Mnet, for simulating general multiphysics and multiscale problems. To use the DeepONets for prediction at an unseen condition, at least one of the electroconvection fields is required as input. However, this is not realistic in general.
Therefore, we develop the DeepM&Mnet, which can integrate several pre-trained DeepONets with a few measurements from
any of the fields to produce the full fields of the coupled multiphysics system. To this end, we develop two architectures
of DeepM&Mnet. One of them, the parallel DeepM&Mnet, is illustrated in Fig. 1. In the context of DeepM&Mnet, a neural
network, which takes the spatial coordinates as inputs, is applied to approximate the multiphysics solutions. In other words,
the neural network is the surrogate model of the multiphysics electroconvection, where the pre-trained DeepONets are
used as building blocks and form the constraints for the solutions. The proposed DeepM&Mnet can thus be considered as a framework that assimilates the data measurements and the physics-related operators. We note that there are methods, such as PDE-constrained optimization with adjoint- or ensemble-variational techniques, that can address similar problems in diverse applications of fluid mechanics [36–41]. However, the DeepM&Mnet framework is much more efficient than these methods, since it avoids solving the PDE system numerically, which conventional data assimilation methods require.
The paper is organized as follows. In Section 2, we introduce the numerical simulation of the electroconvection problem
and the data preparation. In Section 3, we introduce the concept of DeepONet and train several DeepONets to predict sep-
arately each field accurately. In Section 4, we propose two architectures of DeepM&Mnet, namely the parallel DeepM&Mnet
and the series DeepM&Mnet, and then evaluate the performance of these new networks. Finally, we conclude the paper in Section 5. We also present some details of the implementations of DeepONet and DeepM&Mnet, e.g., the choice of the training loss, regularization issues, etc., in the appendices.
Fig. 1. DeepM&Mnet for 2D electroconvection problem: schematic of the parallel architecture. Here, the neural network (NN), which is used to approximate
the multiphysics solutions, is a fully-connected network with trainable parameters. The pre-trained DeepONets are fixed (not trainable) and they are
considered as the constraints of the NN outputs.
2. Electroconvection: governing equations and numerical simulation

We developed an electroconvection solver using the high-order spectral element method [42]. Following the numerical study in [27], the governing equations in dimensionless form, including the Stokes equations, the electric potential equation and the ion transport equations, are given as follows:
∂u/∂t = −∇p + ∇²u + f_e,  (1a)

∇ · u = 0,  (1b)

−2ε²∇²φ = ρ_e,  (1c)

∂c±/∂t = −∇ · J±,  (1d)

J± = c±u − ∇c± ∓ c±∇φ.  (1e)
Here, u is the velocity vector, p is the pressure field and φ is the electric potential. Moreover, c⁺ and c⁻ are the cation and anion concentrations, respectively. Also, ρ_e = z(c⁺ − c⁻) is the free charge density with ionic valence z = 1, and f_e = −κρ_e∇φ/(2ε²) is the electrostatic body force, where ε is the Debye length and κ is the electrohydrodynamic coupling constant, whose value is 0.5 throughout this paper.
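To make the electrostatic coupling concrete, the following is a minimal sketch (not from the paper) of evaluating the free charge density ρ_e = z(c⁺ − c⁻) and the body force f_e = −κρ_e∇φ/(2ε²) on a uniform grid with NumPy finite differences; the function name and the grid layout (fields indexed as [y, x]) are assumptions.

```python
import numpy as np

def electrostatic_body_force(c_plus, c_minus, phi, dx, dy,
                             z=1.0, kappa=0.5, eps=1e-2):
    """Hypothetical helper: rho_e and f_e of Eq. (1) on a uniform [y, x] grid."""
    rho_e = z * (c_plus - c_minus)               # free charge density
    dphi_dy, dphi_dx = np.gradient(phi, dy, dx)  # gradients along y (axis 0) and x (axis 1)
    coeff = -kappa * rho_e / (2.0 * eps**2)      # kappa = 0.5, eps = Debye length
    return rho_e, (coeff * dphi_dx, coeff * dphi_dy)
```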
We consider electroconvection between an ion-selective surface and a stationary reservoir subject to an applied voltage Δφ. As shown in Fig. 2, the simulation domain is a rectangle of size [−3, 3] × [0, 1]. On the top surface (y = 1), a no-slip and no-penetration wall boundary condition (u = 0) for the Stokes equations, constant ion concentrations (c⁺ = c⁻ = 1) and a constant potential (φ = Δφ) are imposed. The bottom surface (y = 0) is also a solid wall; it is assumed to be impermeable to anions (J⁻ = 0) but permeable to cations, whose concentration is maintained at c⁺ = 2. The potential on the bottom surface is φ = 0. Periodic boundary conditions are used on both the right (x = 3) and left (x = −3) boundaries. The electroconvection solver is developed based on our spectral element library NekTar, where the spectral element method is used to discretize the governing equations in space and a second-order stiffly-stable scheme is employed for time advancement [42].
To validate the spectral element electroconvection solver, we first compare our simulation results with those in the literature. Following [27], we performed nine 2D simulations at a fixed Debye length ε = 10⁻³, with Δφ = 25, 30, 35, 40, 50, 55, 80, 100 and 120, respectively, where ε and Δφ are the key parameters determining the flow of the electroconvection system. The computational domain consists of 48 × 64 quadrilateral elements, which are uniformly distributed in
Fig. 3. Spectral element simulation results of the current density (I) on the top wall and validation against the simulations of [27]. (a) x-averaged I as a function of time (t): black line: applied voltage Δφ = 25; blue line: Δφ = 40; green line: Δφ = 80; red line: Δφ = 120. (b) Time- and x-averaged ⟨I⟩ as a function of Δφ: red circles: NekTar simulation; blue triangles: simulation results of [27]. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)
the x direction, but clustered in the vicinity of the walls. The smallest element size in the y direction is 0.0011. Following the procedure in [27], the initial condition is generated by solving the governing equations without flow and randomly perturbing the resulting concentration fields locally by 1%. Moreover, the time step is δt = 5 × 10⁻⁷, and 4 spectral element modes (3rd-order polynomials) and 7 quadrature points (per direction) are used in all the validation cases. All the simulations were stopped after t > 0.3, and convergence was verified. The collection of statistical samples began at t = 0.15, when the current on the top surface had entered a statistically stationary state. Fig. 3 compares the current spectral element solutions to those of [27]. Consistent with the findings in [27], we observe from Fig. 3(a) that the current (I) on the top surface is steady at the low voltage Δφ = 25, becomes unsteady when Δφ = 40, and shows chaotic behavior as Δφ is increased further. Quantitatively, the time-averaged current ⟨I⟩ is in good agreement with the results of [27], as shown in Fig. 3(b).
The simulations described above with ε = 10⁻³, which mostly involve unsteady flows for different Δφ, are used to validate our CFD solver. However, as mentioned before, we mainly focus on the steady-state electroconvection problem and use it to put forward the new data assimilation framework in this paper. To this end, we performed simulations at ε = 10⁻² with Δφ varied systematically in the range Δφ ∈ [5, 75], which form the training data for deep learning. The flow is steady under these conditions. The computational domain is the same as that used in the validation, but the number of elements is reduced to 32 × 32. Also, since a larger ε is used, the smallest element size in the y direction is increased to 0.0082, the time step is increased to δt = 10⁻⁵, the number of spectral element modes is 3 and the number of quadrature points per element in each direction is 5. All the simulations were stopped at t = 2.5, which is sufficiently long for the current on the top surface to reach steady state. For each simulation condition, we save the 2D snapshots of φ, u, v, p and c± at t = 2.5, which are used as the training data.
3. DeepONets for electroconvection

In this section, we describe the DeepONets that will be used as the building blocks in DeepM&Mnets. The DeepONet was proposed in [22] for learning general nonlinear operators, including different types of PDEs. Let G be an operator mapping from one space of functions to another. In this study, G is represented by a DeepONet, which takes an input composed of two components, a function U and the location point (x, y), and outputs G(U)(x, y). Note that the selection of input and output functions for a DeepONet is problem-dependent. In this paper, we design six independent DeepONets according to the physics of electroconvection, which can be divided into two classes. The first class takes the electric potential φ(x, y) as input and predicts the velocity, pressure and concentration fields (u, v, p, c⁺, c⁻); these operators are denoted G_u, G_v, G_p, G_c⁺ and G_c⁻, respectively, and are proposed based on the steady-state formulations of Eqs. (1a) and (1e), i.e., with the time derivatives set to zero. In this case, the electric potential specifies the force term, which determines the transport patterns of the flow and the ions. The second class, represented by G_φ, is used to predict φ from the cation and anion concentrations, and is motivated by Eq. (1c), from which φ can be solved when c⁺ and c⁻ are given. The definitions of these DeepONets are given in Table 1.
In this paper, we apply the “unstacked” DeepONet architecture, which is composed of a branch network and a trunk network. The DeepONets are implemented in DeepXDE [43], a user-friendly Python library designed for scientific machine learning. Schematic diagrams of the proposed networks are shown in Fig. 4. In this framework, the trunk network takes the coordinates (x, y) as input and outputs [t₁, t₂, …, t_p]ᵀ ∈ ℝᵖ. In addition, the input function, represented by m discrete values (e.g., [φ(x₁, y₁), …, φ(x_m, y_m)]ᵀ), is fed into the branch network. The two vectors from the branch and trunk nets are then merged via a dot product to obtain the output function value.
Table 1
DeepONets for 2D electroconvection: definitions of the 2D operators. We propose these operators based on the governing equations of the investigated problem. For example, the electric potential φ can be solved when c⁺ and c⁻ are given, according to Eq. (1c).
Fig. 4. DeepONets for 2D electroconvection: schematics of the DeepONet architectures. Left: using the electric potential to predict the velocity, pressure, cation and anion concentrations, i.e., G_{u,v,p,c⁺,c⁻}: φ → [u, v, p, c⁺, c⁻]; right: using the concentrations to predict the electric potential, i.e., G_φ: [c⁺, c⁻] → φ.
Table 2
DeepONets for 2D electroconvection: architectures and loss functions of the DeepONets. MSE: mean squared error; MAPE: mean absolute percentage error.

DeepONet | Branch depth | Branch width | Trunk depth | Trunk width     | Loss
G_φ      | 2            | [300, 200]   | 3           | [100, 200, 200] | MSE
G_u      | 2            | [200, 200]   | 3           | [100, 200, 200] | MAPE
G_v      | 2            | [200, 200]   | 3           | [100, 200, 200] | MAPE
G_p      | 2            | [200, 200]   | 3           | [100, 200, 200] | MSE
G_c⁺     | 2            | [200, 200]   | 3           | [100, 200, 200] | MSE
G_c⁻     | 2            | [200, 200]   | 3           | [100, 200, 200] | MSE
We use fully-connected networks as the sub-networks (i.e., the trunk and branch nets) in this study. The number of hidden layers and the number of neurons per layer of these sub-networks are given in Table 2.
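As an illustration of this architecture, the following is a minimal PyTorch sketch of the unstacked DeepONet, using the G_u sizes from Table 2 (branch width [200, 200], trunk width [100, 200, 200]); the class name, the feature dimension p = 200 implied by the dot product, and m = 231 (the 21 × 11 sensors introduced below) are assumptions, not the paper's implementation (which uses DeepXDE).

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Fully-connected ReLU network; no activation after the last layer."""
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])

class DeepONet(nn.Module):
    """Unstacked DeepONet: branch(input function) . trunk(location)."""
    def __init__(self, m=231):                  # m = 21 x 11 sensor values (assumed)
        super().__init__()
        self.branch = mlp([m, 200, 200])        # Table 2: branch width [200, 200]
        self.trunk = mlp([2, 100, 200, 200])    # Table 2: trunk width [100, 200, 200]

    def forward(self, u_sensors, xy):
        # u_sensors: (batch, m) input field at the sensors; xy: (batch, 2)
        b = self.branch(u_sensors)              # (batch, 200)
        t = self.trunk(xy)                      # (batch, 200)
        return (b * t).sum(-1, keepdim=True)    # dot product -> G(U)(x, y)
```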
The DeepONets are trained by minimizing a loss function, which measures the difference between the labels and NN predictions. In general, the mean squared error (MSE) is applied:

MSE = (1/N) Σ_{i=1}^{N} (V_i − V̂_i)²,  (2)
where V represents the predicted variable, V_i and V̂_i are the labeled data and the prediction, respectively, and N is the number of training data points. Alternatively, the mean absolute percentage error (MAPE) is also considered in this paper:
MAPE = (1/N) Σ_{i=1}^{N} |V_i − V̂_i| / (|V_i| + η),  (3)
where η is a small number that guarantees stability when V_i = 0. The MSE loss works well in most cases, while the MAPE loss is better when the output has a large range of function values. In this paper, we apply the MSE loss to the training of G_{φ,p,c⁺,c⁻} and the MAPE loss for G_{u,v}. A comparison between the MSE and MAPE losses applied to G_u is given in Appendix A, showing the advantage of the MAPE loss in dealing with multiscale data. Both losses are transcribed below.
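A direct transcription of Eqs. (2) and (3) in PyTorch follows; the value of η is illustrative, as the paper does not report it.

```python
import torch

def mse_loss(v, v_hat):
    """Eq. (2): mean squared error."""
    return torch.mean((v - v_hat) ** 2)

def mape_loss(v, v_hat, eta=1e-6):
    """Eq. (3): mean absolute percentage error; eta (assumed value) stabilizes V_i = 0."""
    return torch.mean(torch.abs(v - v_hat) / (torch.abs(v) + eta))
```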
The training of DeepONets requires datasets with labeled outputs. We apply NekTar to simulate the 2D steady-state fields of the electroconvection problem. The computational domain is defined as Ω: x ∈ [−1, 1], y ∈ [0, 1]. As mentioned in Section 2,
Fig. 5. DeepONets for 2D electroconvection: demonstration of the training data. (a) 2D steady fields of φ, u and v under various boundary conditions, i.e., Δφ = 5, 20, 75; (b) 1D profiles of φ(x = −0.5, y), u(x = −0.5, y) and c⁺(x = −0.5, y) for the 15 training states with Δφ = 5, 10, …, 75. The arrow indicates the direction of increasing Δφ.
different steady-state patterns can be produced by modifying the electric potential difference between the boundaries. The fields φ(x, y), c⁺(x, y), c⁻(x, y) and u(x, y), v(x, y), p(x, y) are collected. By modifying the boundary conditions of φ, namely using Δφ = 5, 10, …, 75, we generate 15 steady states for this electroconvection problem. The 2D snapshots at various values of Δφ and some 1D profiles of φ(x = −0.5, y), u(x = −0.5, y), c⁺(x = −0.5, y) are shown in Fig. 5. Note that throughout this paper, the data are dimensionless and normalized for enhanced stability in the DeepONet training; the normalization constants are a priori known. As shown in Fig. 5, the flow pattern varies significantly with Δφ. Moreover, the magnitude of the normalized velocity ranges from the order of 10⁻⁴ to 10⁰, showing the multiscale nature of this electroconvection problem.
For each 2D input field, we have 21 × 11 uniformly-distributed sensors to represent the function. For the corresponding output fields, we randomly select 800 data points in space for each state variable. In this setting, we have N = 15 × 800 = 12000 training data points in all, where each data item is a triplet; for example, for the DeepONet G_u, one data item is [φ, (x, y), u(x, y)]. It is worth noting again that the training data, which need not lie on an equispaced grid, can be selected randomly in the computational domain. Moreover, it is possible to include sparse experimental measurements from sensors in the dataset. These advantages show the flexibility in preparing the data for DeepONet training; a sketch of assembling such triplets is given below. We also use NekTar to generate fields under two additional conditions, namely Δφ = 13.4 and Δφ = 62.15, which are not included in the training dataset and are used for testing and validation.
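The following sketch shows how such triplets could be assembled for G_u; the function and argument names are hypothetical placeholders for the NekTar snapshot arrays.

```python
import numpy as np

def build_triplets(phi_sensors_all, u_all, xy_all, n_pts=800, seed=0):
    """Assemble [phi, (x, y), u(x, y)] triplets for training G_u (hypothetical names).

    phi_sensors_all: (15, 231) input functions sampled at the 21 x 11 sensors
    u_all, xy_all:   per-state arrays of u values and matching (x, y) coordinates
    """
    rng = np.random.default_rng(seed)
    branch, trunk, labels = [], [], []
    for phi_s, u, xy in zip(phi_sensors_all, u_all, xy_all):
        idx = rng.choice(len(u), size=n_pts, replace=False)  # 800 random points
        for i in idx:
            branch.append(phi_s)     # the same input function for every point
            trunk.append(xy[i])      # location of the output value
            labels.append(u[i])      # target u(x, y)
    # N = 15 states x 800 points = 12000 triplets in all
    return np.asarray(branch), np.asarray(trunk), np.asarray(labels)
```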
To train the DeepONets, we apply the Adam optimizer with a small learning rate of 2 × 10⁻⁴, and the networks are trained for 500,000 iterations. The activation function of the neural networks is ReLU. The training and testing losses during the training process are presented in Fig. 6, where all losses converge to small values.
Fig. 6. DeepONets for 2D electroconvection: training and testing losses of G_{φ,u,v,p,c⁺,c⁻}. Note that we use the MSE loss for training G_{φ,p,c⁺,c⁻} and the MAPE loss for G_{u,v}; thus the magnitudes of the loss functions differ.
Note that we apply the MAPE loss to G_{u,v}, and thus the magnitude of their losses differs from the others. Upon training, these DeepONets can predict all fields accurately when the input functions are given.
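A minimal full-batch training loop matching these settings (Adam, learning rate 2 × 10⁻⁴, 500,000 iterations) might look as follows; it assumes the DeepONet and loss sketches above, and the full-batch update is a simplification since the paper does not report the batching strategy.

```python
import torch

def train(model, branch, trunk, labels, loss_fn, n_iter=500_000, lr=2e-4):
    """Train a DeepONet sketch on the triplet arrays (illustrative only)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    u_s = torch.as_tensor(branch, dtype=torch.float32)
    xy = torch.as_tensor(trunk, dtype=torch.float32)
    v = torch.as_tensor(labels, dtype=torch.float32).reshape(-1, 1)
    for _ in range(n_iter):
        opt.zero_grad()
        loss = loss_fn(v, model(u_s, xy))  # MAPE for G_u, G_v; MSE otherwise
        loss.backward()
        opt.step()
    return model
```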
Here, we demonstrate the predictions of the well-trained DeepONets for a new case (not included in the training set) corresponding to Δφ = 62.15. The results of G_u, G_v, G_p, G_c⁺ and G_c⁻ are shown in Fig. 7. In this figure, the input function φ(x, y), represented by a set of values at the sensor locations (shown in the left column), is fed into the networks. We find that the DeepONet predictions are in good agreement with the fields generated by the NekTar simulation. The MSEs for the different state variables are of the order of O(10⁻⁶–10⁻⁵). For testing the DeepONet G_φ, the inputs c⁺ and c⁻ are provided; the prediction of φ is illustrated in Fig. 8.
The prediction errors for both Δφ = 13.4 and Δφ = 62.15 are given in Table 3. In addition to the MSEs, we also compute the relative L₂-norm errors, namely:

ε_V = ‖V − V̂‖₂ / ‖V‖₂.  (4)
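In code, Eq. (4) is a one-liner over the flattened field arrays:

```python
import numpy as np

def rel_l2(v, v_hat):
    """Eq. (4): relative L2-norm error of a predicted field."""
    return np.linalg.norm(v - v_hat) / np.linalg.norm(v)
```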
As shown, the relative errors of G_{φ,p,c⁺,c⁻} are less than 1%, indicating that the predictions of these networks are very accurate. In the case of Δφ = 13.4, the relative L₂ errors of the velocity fields (u and v) are much larger than the others. The reason is that the maximal velocity value is almost zero (O(10⁻⁴)) when Δφ = 13.4, so the relative error is very sensitive to a minor difference. In fact, the predictions of the DeepONets G_u and G_v are very close to the references, as can be seen from the small MSEs in the table. This is not the case for Δφ = 62.15, where the magnitude of the velocity is much larger, resulting in a larger MSE but a smaller relative error. To reduce the effect of this multiscale nature, we apply the MAPE loss when training G_u and G_v; a comparison between the MSE and MAPE losses applied to G_u is given in Appendix A.
We should note that once the DeepONets have been well trained, the prediction is a simple network evaluation task that
can be performed very efficiently. In the 2D electroconvection problem, the training of a DeepONet requires approximately
2 hours. However, the well-trained DeepONets take less than 1 second (on any common CPU/GPU) to predict the 2D fields
when the input functions are provided. The speedup of DeepONet prediction over the NekTar simulation for unseen conditions is about 1,000-fold.
4. DeepM&Mnet

In order to use the pre-trained DeepONets, the input function must be given. For example, the proper electric potential φ(x, y) should be provided to G_u to obtain the u-velocity. However, this is not realistic in general. Therefore, we develop a
new framework called DeepM&Mnet, which allows us to infer the full fields of the coupled electroconvection problem when
only a few measurements for any of the state variables are available. In the context of DeepM&Mnet, a neural network is
used to approximate the solutions of the electroconvection, while the pre-trained DeepONets are applied as the constraints
of the solutions. In the following, we introduce two architectures of the proposed DeepM&Mnet in Section 4.1 and present
the evaluation results in Section 4.2 as well as the discussion in Section 4.3.
Fig. 7. DeepONets for 2D electroconvection: predictions of u, v, p, c⁺ and c⁻ (from top to bottom) using the DeepONets G_u, G_v, G_p, G_c⁺ and G_c⁻ for a new case corresponding to Δφ = 62.15. Note that φ (upper left), the input of these networks, is represented by a set of values, and (u, v, p, c⁺, c⁻) are the outputs of five independent DeepONets. For comparison, we generate the reference fields using NekTar simulations and plot the absolute errors (right column).
Fig. 8. A DeepONet for 2D electroconvection: prediction of φ using the DeepONet G_φ in the case of Δφ = 62.15. Note that c⁺ and c⁻ (represented by discrete values) are the inputs of this network, and φ(x, y) is the output. For comparison, we generate the reference field using a NekTar simulation and plot the absolute error (right plot).
4.1. DeepM&Mnet architectures

Parallel DeepM&Mnet. The pre-trained DeepONets are applied to deal with this problem: they are fixed (not trainable) and considered as constraints on the NN outputs, as shown in Fig. 1. The neural network, which takes (x, y) as inputs and outputs (φ, u, v, p, c⁺, c⁻), is trained by minimizing the following loss function:

L = λ_d L_data + λ_o L_op + λ_r L₂(θ),  (5)

where λ_d, λ_o and λ_r are the weighting coefficients of the loss terms; L₂(θ) = ‖θ‖₂² is the L₂ regularization of the trainable parameters θ, which helps avoid overfitting and stabilizes the training process (its influence on the DeepM&Mnet is investigated in Appendix B); and
Table 3
DeepONets for 2D electroconvection: MSEs and relative L₂-norm errors of the DeepONet predictions for the two testing examples. In the case of Δφ = 13.4, the relative L₂-norm errors of the velocities (u and v) are much larger than the others because the magnitude of the velocity in this case is almost zero (O(10⁻⁴)).
Fig. 9. DeepM&Mnet for the 2D electroconvection problem: schematic of the series architecture. Here, the neural network (NN), which approximates only one solution (i.e., φ), is a fully-connected network with trainable parameters. The other variables (i.e., u, v, p, c⁺, c⁻) are hidden outputs in this framework. The pre-trained DeepONets are fixed (not trainable) and are considered as constraints on the NN outputs. The input of each DeepONet involves the input function (for the branch net) and the coordinates of the output function (for the trunk net). For simplicity, the coordinates feeding into each DeepONet are not shown in the figure.
L_data = (1/N_d) Σ_{V ∈ {φ,u,v,p,c⁺,c⁻}} Σ_{i=1}^{N_d} (V(x_i, y_i) − V_data(x_i, y_i))²,

L_op = (1/N_op) Σ_{V ∈ {φ,u,v,p,c⁺,c⁻}} Σ_{i=1}^{N_op} (V(x_i, y_i) − V*(x_i, y_i))²,
where L_data is the data mismatch and L_op is the difference between the neural network outputs and the DeepONet outputs. V can be any of the investigated variables (φ, u, v, p, c⁺, c⁻); V(x_i, y_i) denotes the output of the fully-connected network, while V*(x_i, y_i) is the output of the corresponding DeepONet. N_d and N_op denote the number of measurements for each variable and the number of points for evaluating the operators, respectively. Here, we would like to add some comments on the DeepM&Mnet. First, it is not necessary to have measurements for every variable: if measurements are only available for φ, the DeepONets G_u, G_v, G_p, G_c⁺ and G_c⁻ can provide constraints for the other variables and guide the NN outputs to the correct solutions. Second, in the framework of the parallel DeepM&Mnet (Fig. 1), we have not only the outputs of the fully-connected network, V ∈ {φ, u, v, p, c⁺, c⁻}, but also the outputs of the DeepONets, V* ∈ {φ*, u*, v*, p*, c⁺*, c⁻*}. Ideally, V and V* should converge to the same values; in practice, there is a bias between V and V* due to the approximation error of the pre-trained DeepONets and the optimization error of the neural network. Moreover, we note that the input of each DeepONet involves the input function (for the branch net) and the coordinates of the output function (for the trunk net). For simplicity, the coordinates feeding into each DeepONet are not shown in Fig. 1. A sketch of the resulting loss is given below.
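The following is a minimal sketch of the parallel DeepM&Mnet loss, assuming the PyTorch DeepONet sketch of Section 3; the function and variable names are hypothetical, the squared-error form follows the MSE used elsewhere in the paper, and G_φ (whose branch input is (c⁺, c⁻) rather than φ) is omitted for brevity.

```python
import torch

def parallel_loss(net, ops, xy_data, v_data, xy_op, xy_sensors,
                  lam_d=10.0, lam_o=1.0, lam_r=1e-4):
    """Eq. (5) for the parallel DeepM&Mnet (illustrative sketch)."""
    # Data term: NN outputs (phi, u, v, p, c+, c-) vs. sparse measurements.
    l_data = torch.mean((net(xy_data) - v_data) ** 2)

    # Operator term: NN outputs vs. frozen DeepONet outputs at the N_op points.
    # The NN's own phi, evaluated at the sensor grid, feeds the branch nets.
    out = net(xy_op)                                 # (N_op, 6)
    phi_sensors = net(xy_sensors)[:, 0:1].T          # (1, m) branch input
    branch = phi_sensors.expand(xy_op.shape[0], -1)  # repeated for each point
    l_op = 0.0
    for k, G in ops.items():   # e.g. {1: G_u, 2: G_v, 3: G_p, 4: G_c+, 5: G_c-}
        l_op = l_op + torch.mean((out[:, k:k+1] - G(branch, xy_op)) ** 2)

    # L2 regularization of the NN parameters only; the DeepONets stay frozen.
    l_reg = sum((p ** 2).sum() for p in net.parameters())
    return lam_d * l_data + lam_o * l_op + lam_r * l_reg
```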
Fig. 10. DeepM&Mnet for the 2D electroconvection problem: results of the parallel architecture. In this case, 20 measurements of each field inside the domain are provided, marked with ∗. Left: results of the NN outputs (i.e., φ, u, v, p, c⁺, c⁻); right: point-wise difference.
In the series DeepM&Mnet (Fig. 9), the corresponding loss terms are:

L_data = (1/N_d) Σ_{i=1}^{N_d} (φ(x_i, y_i) − φ_data(x_i, y_i))²,

L_op = (1/N_op) Σ_{i=1}^{N_op} (φ(x_i, y_i) − φ*(x_i, y_i))².
Different from the parallel architecture, in the series DeepM&Mnet V represents only φ, while V* ∈ {φ*, u*, v*, p*, c⁺*, c⁻*}. This framework shows that, given a few measurements of φ, the neural network can produce the full field of φ; all other fields are then obtained from the pre-trained DeepONets inside the loop, as sketched below.
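A corresponding sketch of the series loss, under the same assumptions as above, shows the closed loop φ → (c⁺, c⁻) → φ* explicitly; the handling of the DeepONet branch inputs is schematic.

```python
import torch

def series_loss(net_phi, G_cp, G_cm, G_phi, xy_data, phi_data,
                xy_op, xy_sensors, lam_d=10.0, lam_o=1.0, lam_r=1e-4):
    """Series DeepM&Mnet loss: phi -> (c+, c-) -> phi* (illustrative sketch)."""
    # Data term: NN phi vs. the sparse phi measurements.
    l_data = torch.mean((net_phi(xy_data) - phi_data) ** 2)

    # Closed loop: the NN's phi at the sensor grid feeds G_c+ and G_c-,
    # whose hidden outputs at the same grid feed the branch net of G_phi.
    m = xy_sensors.shape[0]
    branch_phi = net_phi(xy_sensors).T.expand(m, -1)  # (m, m)
    c_plus = G_cp(branch_phi, xy_sensors)             # (m, 1) hidden c+
    c_minus = G_cm(branch_phi, xy_sensors)            # (m, 1) hidden c-
    branch_cc = torch.cat([c_plus, c_minus], 0).T     # (1, 2m)
    phi_star = G_phi(branch_cc.expand(xy_op.shape[0], -1), xy_op)
    l_op = torch.mean((net_phi(xy_op) - phi_star) ** 2)

    l_reg = sum((p ** 2).sum() for p in net_phi.parameters())
    return lam_d * l_data + lam_o * l_op + lam_r * l_reg
```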
4.2. Results

We first investigate the performance of the DeepM&Mnet for the 2D electroconvection problem, where the data are generated by NekTar with Δφ = 62.15. This corresponds to an input not seen by the pre-trained DeepONets, which are the building blocks of the DeepM&Mnet. First, 20 measurements of each field inside the domain and six boundary points are provided for the parallel architecture. The neural network contains 6 hidden layers and 100 neurons per layer. The hyperbolic tangent function is applied as the activation function. The neural network is trained by the Adam optimizer for 60,000 iterations with a learning rate of 5 × 10⁻⁴. For the loss function, we set λ_d = 10, λ_o = 1 and λ_r = 10⁻⁴. To evaluate the operator loss, we randomly choose 1000 points inside the domain, namely N_op = 1000. The resulting fields of one typical training run are shown in Fig. 10, together with the point-wise differences between the NN outputs and the NekTar simulations. The relative L₂-norm errors of V are: ε_φ = 0.47%, ε_u = 2.83%, ε_v = 1.95%, ε_p = 1.66%, ε_c⁺ = 2.18%, ε_c⁻ = 2.53%. In addition, we can obtain the solutions from the DeepONet outputs V*, for which the relative errors are: ε_φ = 0.54%, ε_u = 1.26%, ε_v = 0.54%, ε_p = 0.93%, ε_c⁺ = 0.67%, ε_c⁻ = 0.48%. The DeepONet outputs differ from the NN outputs due to the approximation error of the pre-trained DeepONets and the optimization error of the neural network. However, we cannot conclude that either one (V or V*) is always better, since the network training involves randomness. In our specific case, the electric potential φ appears to be easier to approximate than the other fields (i.e., ε_φ < ε_{u,v,p,c⁺,c⁻}); hence, the DeepONets, using φ as input, also have a higher potential to provide the fields (u, v, p, c⁺, c⁻) with better accuracy.
Next, we investigate the effect of the number of sensors (N_data). Fig. 11 shows the convergence of the relative L₂-norm errors with respect to the number of measurements. For comparison, we perform data fitting with a standard neural network for the same problem, in which the operator loss L_op is not considered, namely L = λ_d L_data + λ_r L₂(θ). We can see from the figure that without the DeepONet constraints, the neural network requires hundreds of sensors to obtain good accuracy (error less than 10%) for every state variable, whereas only 10 measurements of each field are sufficient for the DeepM&Mnet to achieve the same accuracy. Note that the sensors are randomly distributed inside the domain; we therefore perform 5 independent training runs with different sensor locations to reduce the effect of sensor selection, and then compute the mean errors and standard deviations. We find that the deviations (shown as the shaded regions in the figure) are mostly very small for all the fields inferred by the DeepM&Mnet, indicating the robustness of the proposed algorithm.
Moreover, we also demonstrate the influence of the number of operator evaluation points, N_op. We use 20 measurements for each field. The relative L₂-norm errors are shown in Fig. 12, where we find that with more evaluation points for the DeepONets, the results are more accurate, since there are more constraints on the solutions (NN outputs).
Fig. 11. DeepM&Mnet for the 2D electroconvection problem: relative L₂-norm errors with respect to the number of measurements (N_data). The DeepM&Mnet results are from the neural network (namely V ∈ {φ, u, v, p, c⁺, c⁻}) of the parallel architecture. For each setting, 5 independent training runs with randomly-distributed sensors are performed, and the mean errors and standard deviations are computed.
Fig. 12. DeepM&Mnet for the 2D electroconvection problem: relative L₂-norm errors with respect to the number of operator evaluation points (N_op). The DeepM&Mnet results are from the neural network (namely V ∈ {φ, u, v, p, c⁺, c⁻}) of the parallel architecture. For each setting, 5 independent training runs with randomly-distributed sensors are performed, and the mean errors and standard deviations are computed.
Similarly, 5 independent training runs are performed, and the mean errors and standard deviations are given in the plots. A small value of N_op results in a significantly larger standard deviation, since the DeepONet constraints are then weak and data overfitting occurs.
For testing the series architecture of DeepM&Mnet, only data of φ are given. An example of the resulting fields is shown in Fig. 13. In this case, 20 measurements of φ are provided and N_op = 1000; the neural network consists of 6 layers with 100 neurons per layer. The other fields are generated by the pre-trained DeepONets. As shown in the figure, the outputs are very consistent with the simulation. The relative L₂-norm errors are: ε_φ = 1.45% for the NN output, and ε_φ = 0.75%, ε_u = 4.14%, ε_v = 1.33%, ε_p = 0.93%, ε_c⁺ = 1.31%, ε_c⁻ = 0.95% for the DeepONet outputs.
4.3. Discussion

Comparison between the two DeepM&Mnets. In the assessment above, we used sensors for all the state variables in the parallel DeepM&Mnet. However, the data term in the parallel architecture's loss function, Eq. (5), is still well-defined when only data of φ are available, which is consistent with the setup defined for the series DeepM&Mnet. Here, we perform a test to compare the parallel and series DeepM&Mnets when they are provided with the same data measurements. First, a network size of 6 × 100 is employed for both architectures. The hyperparameters are: λ_d = 10, λ_o = 1, λ_r = 10⁻⁴ and N_op = 1000. The neural network is trained for 60,000 iterations with a learning rate of 5 × 10⁻⁴. The convergence of the relative L₂-norm errors with respect to the number of measurements is shown in Fig. 14(a), where we find that the series architecture performs better than the parallel one. In particular, when the data are limited (N_data < 20), the parallel DeepM&Mnet appears to be ill-posed and fails to infer the flow fields correctly. The discrepancy between the two architectures becomes larger when a smaller network (3 × 50) is used, as shown in Fig. 14(b). In
Fig. 13. DeepM&Mnet for the 2D electroconvection problem: results of the series architecture. In this case, 20 measurements of φ are provided, marked with ∗. Left: results of the DeepONet outputs (i.e., φ*, u*, v*, p*, c⁺*, c⁻*); right: point-wise difference.
Fig. 14. DeepM&Mnet for the 2D electroconvection problem: comparison between the parallel and series architectures when only data of φ are available. Two network sizes are investigated: (a) 6 × 100 and (b) 3 × 50. The relative L₂-norm errors of φ are plotted; when only data of φ are used in the DeepM&Mnets, the series architecture performs better than the parallel one in this electroconvection problem.
the parallel DeepM&Mnet, the neural network is approximating all the variables, while it is used only for φ in the series
setup. Therefore, the series DeepM&Mnet is less sensitive to the network size.
In summary, we conclude that the series DeepM&Mnet outperforms the parallel DeepM&Mnet when only data of the electric potential are available, although the parallel structure also provides acceptable results. However, the parallel DeepM&Mnet can utilize more information from the data; we therefore recommend the parallel DeepM&Mnet when measurements of different variables are available, together with a large neural network for better performance. Moreover, the series DeepM&Mnet requires a closed loop to construct the operator loss, e.g., φ → (c⁺, c⁻) → φ*, which is not always available when designing the DeepONets.
By doing so, the DeepM&Mnets are able to infer the spatio-temporal flow fields using the data measurements collected at different time steps.
5. Concluding remarks
In this paper, we first demonstrate the effectiveness of DeepONets for complex fluid systems using the multiphysics of
electroconvection as a benchmark. We train several DeepONets to predict one of the coupled fields from the rest of the
fields. In our example, we use 15 different conditions corresponding to a range of applied electric potentials to form the
training dataset. Upon training, the DeepONets can predict all fields very accurately and efficiently for any condition when
the input is given.
More importantly, in this paper we propose a novel framework, the DeepM&Mnet, which allows us to integrate the pre-trained DeepONets with a few measurements from any of the fields to produce the full fields of the coupled system. In DeepM&Mnets, we use a neural network as the surrogate model of the multiphysics solutions, and use the pre-trained DeepONets as the constraints for the solutions. For both the parallel and series DeepM&Mnets, we find that only a few measurements are sufficient to infer the full fields of the electroconvection, even if measurements are not available for all state variables. The DeepM&Mnet, which can be considered as a simple data assimilation framework, is more flexible and efficient than conventional numerical methods in dealing with such assimilation problems. Note that in order to use the DeepM&Mnets, the building blocks, the DeepONets, must be pre-trained with labeled data. However, as shown in the paper, preparing the training data is very flexible and the training can be done offline. Once the DeepONets have been trained and embedded in the DeepM&Mnet, it is straightforward to predict the solutions of a complex multiphysics and multiscale system when only a few measurements are available. More broadly, the results in this paper show that the new framework can be used for any type of multiphysics and multiscale problem, for example in hypersonics with finite-rate chemistry, and we will report such results in future work.
CRediT authorship contribution statement

Shengze Cai: Conceptualization, Methodology, Software, Validation, Writing – original draft, Writing – review & editing. Zhicheng Wang: Conceptualization, Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing. Lu Lu: Conceptualization, Methodology, Software, Writing – review & editing. Tamer A. Zaki: Conceptualization,
Table A.1
Prediction errors of G_u with the MSE loss and the MAPE loss.

Loss      | Δφ = 13.4: MSE | Δφ = 13.4: rel. L₂ error | Δφ = 62.15: MSE | Δφ = 62.15: rel. L₂ error
MSE loss  | 5.41 × 10⁻⁸    | 45.27%                   | 1.10 × 10⁻⁵     | 0.88%
MAPE loss | 1.92 × 10⁻⁹    | 8.54%                    | 3.89 × 10⁻⁵     | 1.65%
Fig. A.1. Predictions of G_u with different losses: (a) 2D fields and errors of the u-velocity (the first row is G_u with the MSE loss and the second is G_u with the MAPE loss); (b) 1D profiles at y = 0.5. The testing example shown here is the case of Δφ = 13.4.
Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing. George Em Karniadakis:
Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Writing – original
draft, Writing – review & editing.
Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
The authors acknowledge support from DARPA/CompMods HR00112090062 and DOE/PhILMs DE-SC0019453.
Appendix A. Comparison between the MSE and MAPE losses

As mentioned in Section 3, we apply the MAPE loss when training G_{u,v} because of the large range of velocity magnitudes in the electroconvection problem. If an MSE loss were used for such a multiscale prediction, the training loss and the back-propagation process would be dominated by the velocities with larger magnitudes, and the small-scale velocities would contribute little to the NN optimization. Therefore, the MAPE loss is better suited for DeepONets that must predict multiscale values.
A comparison between the MSE and MAPE losses applied to G_u is shown here. We learn from Fig. 5 that the u-velocity is a multiscale state variable. In particular, the magnitude of the velocity is approximately zero (10⁻⁴) when Δφ is small (e.g., Δφ = 13.4), while it is of the order of 10⁰ when Δφ is large (e.g., Δφ = 62.15). The prediction errors of G_u with the MSE loss and the MAPE loss are given in Table A.1. We can see that G_u with the MAPE loss performs much better than the one with the MSE loss for Δφ = 13.4. Although for Δφ = 62.15 the MAPE-trained G_u yields larger errors, it is still very accurate, as the relative L₂-norm error is only 1.65%. To demonstrate the superiority of the MAPE loss for G_u, we show the fields predicted with the different losses in Fig. A.1. The DeepONet with the MSE loss fails to predict the flow pattern at Δφ = 13.4 accurately, while the prediction of G_u with the MAPE loss is very consistent with the NekTar high-fidelity simulation.
Appendix B. Effect of the L₂ regularization

The loss function for DeepM&Mnet training, given in Eq. (5), is composed of a data term, an operator term and an L₂ regularization term on the trainable parameters. We note that the DeepM&Mnet is essentially a function parameterized by the weights and biases of the neural network. The manifold of the parameter space contains a number of local minima,
Fig. B.2. DeepM&Mnet for the 1D electroconvection problem: schematic of the parallel architecture. Here, the neural network (NN) is used to approximate the 1D coupled solutions (i.e., φ and c⁻). The pre-trained DeepONets G_φ and G_c⁻ are fixed and form constraints for the NN outputs.
Fig. B.3. Parallel DeepM&Mnet for the 1D electroconvection problem: results with different loss functions for Δφ = 77.6. First row: loss without L₂ regularization; second row: loss with L₂ regularization. Left: training loss; middle: solution φ(y); right: solution c⁻(y). Here, 5 measurements are used for each coupled field (i.e., φ and c⁻). Overfitting is observed in the case without L₂ regularization.
some of which lead to data overfitting. Although the operator loss, involving the pre-trained DeepONets, acts as a constraint on the approximated solution, it is not enough to guarantee a unique solution, especially when only a very small number of measurements is available and the input functions of the DeepONets (denoted by V) are not well defined (i.e., the number of data points is much smaller than the number of sensors required for the DeepONet input). Therefore, we apply the L₂ regularization, which is commonly used to avoid overfitting in data-driven methods and is easy to implement in the DeepM&Mnet framework. In this section, we present an example showing the effect of the L₂ regularization.
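A minimal sketch of this ablation, assuming the loss terms and coefficients of Eq. (5) as defined in Section 4 (names illustrative), simply toggles the explicit L₂ penalty:

```python
import torch

def l2_penalty(net: torch.nn.Module) -> torch.Tensor:
    """Explicit L2 penalty over all trainable parameters, as in Eq. (5)."""
    return sum((p ** 2).sum() for p in net.parameters())

def total_loss(l_data, l_op, net, lam_d=10.0, lam_o=1.0, lam_r=1e-4,
               regularize=True):
    """DeepM&Mnet loss with the L2 term toggled, as in the Fig. B.3 comparison."""
    loss = lam_d * l_data + lam_o * l_op
    return loss + lam_r * l2_penalty(net) if regularize else loss
```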
We use the 1D electroconvection as the benchmark here. For 1D electroconvection, the electric potential φ and the concentration c⁻ are functions of y. We train two independent DeepONets G_φ and G_c⁻, where G_φ takes c⁻(y) as input and outputs φ(y), while G_c⁻ performs the opposite mapping. We generate 24 training trajectories for the 1D electroconvection at various values of Δφ, namely Δφ = 5, 10, …, 120. For each DeepONet, two hidden layers with 100 neurons per layer are used for both the branch net and the trunk net. The input function is represented by 21 discrete values with equal spacing. Upon training, the DeepONets G_φ and G_c⁻ can predict one field from the other very accurately.
Finally, we construct the parallel DeepM&Mnet architecture illustrated in Fig. B.2. A neural network (with 3 layers and 50 neurons per layer) is used to approximate the solutions φ(y) and c⁻(y). We apply the proposed DeepM&Mnet to infer the profiles in the case of Δφ = 77.6. The results with different loss functions are shown in Fig. B.3. Five measurements, uniformly distributed along the y direction, are used for each coupled field. We find that when the L₂ regularization is not included, the loss jumps suddenly during training to a bad minimum and the final solutions overfit the measurements. Although the result is affected by the parameter initialization and the training scheme, this phenomenon is observed frequently when we perform ensemble runs (5 times out of 10 runs). Intuitively, overfitting may be caused by large weights and biases in the neural network, so an L₂ penalty on the magnitudes of the parameters helps to avoid it. As shown in Fig. B.3, the DeepM&Mnet
with L₂ regularization in the loss function is much more robust and attains consistent results with good accuracy for all ten runs with random initialization.
References
[1] S.L. Brunton, B.R. Noack, P. Koumoutsakos, Machine learning for fluid mechanics, Annu. Rev. Fluid Mech. 52 (2020) 477–508.
[2] K. Duraisamy, G. Iaccarino, H. Xiao, Turbulence modeling in the age of data, Annu. Rev. Fluid Mech. 51 (2019) 357–377.
[3] D. Fan, L. Yang, Z. Wang, M.S. Triantafyllou, G.E. Karniadakis, Reinforcement learning for bluff body active flow control in experiments and simulations,
Proc. Natl. Acad. Sci. 117 (2020) 26091–26098.
[4] M. Raissi, P. Perdikaris, G.E. Karniadakis, Physics-informed neural networks: a deep learning framework for solving forward and inverse problems
involving nonlinear partial differential equations, J. Comput. Phys. 378 (2019) 686–707.
[5] J. Sirignano, K. Spiliopoulos, DGM: a deep learning algorithm for solving partial differential equations, J. Comput. Phys. 375 (2018) 1339–1364.
[6] J. Han, A. Jentzen, E. Weinan, Solving high-dimensional partial differential equations using deep learning, Proc. Natl. Acad. Sci. 115 (2018) 8505–8510.
[7] Z. Long, Y. Lu, B. Dong, PDE-Net 2.0: learning PDEs from data with a numeric-symbolic hybrid deep network, J. Comput. Phys. 399 (2019) 108925.
[8] M. Raissi, Z. Wang, M.S. Triantafyllou, G.E. Karniadakis, Deep learning of vortex-induced vibrations, J. Fluid Mech. 861 (2019) 119–137.
[9] M. Raissi, A. Yazdani, G.E. Karniadakis, Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations, Science 367 (2020)
1026–1030.
[10] L. Sun, H. Gao, S. Pan, J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Comput.
Methods Appl. Mech. Eng. 361 (2020) 112732.
[11] X. Jin, S. Cai, H. Li, G.E. Karniadakis, NSFnets (Navier-Stokes flow nets): physics-informed neural networks for the incompressible Navier-Stokes equa-
tions, J. Comput. Phys. 426 (2021) 109951.
[12] J. Ling, A. Kurzawski, J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, J. Fluid Mech. 807
(2016) 155–166.
[13] D. Zhang, L. Lu, L. Guo, G.E. Karniadakis, Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic
problems, J. Comput. Phys. 397 (2019) 108850.
[14] G. Pang, L. Lu, G.E. Karniadakis, fPINNs: fractional physics-informed neural networks, SIAM J. Sci. Comput. 41 (2019) A2603–A2626.
[15] Y. Chen, L. Lu, G.E. Karniadakis, L. Dal Negro, Physics-informed neural networks for inverse problems in nano-optics and metamaterials, Opt. Express
28 (2020) 11618–11633.
[16] A. Yazdani, L. Lu, M. Raissi, G.E. Karniadakis, Systems biology informed deep learning for inferring parameters and hidden dynamics, PLoS Comput.
Biol. 16 (2020) e1007575.
[17] S.H. Rudy, S.L. Brunton, J.L. Proctor, J.N. Kutz, Data-driven discovery of partial differential equations, Sci. Adv. 3 (2017) e1602614.
[18] M. Raissi, P. Perdikaris, G.E. Karniadakis, Physics informed deep learning (part II): data-driven discovery of nonlinear partial differential equations, arXiv
preprint arXiv:1711.10561, 2017.
[19] K. Hornik, M. Stinchcombe, H. White, et al., Multilayer feedforward networks are universal approximators, Neural Netw. 2 (1989) 359–366.
[20] T. Chen, H. Chen, Approximations of continuous functionals by neural networks with application to dynamic systems, IEEE Trans. Neural Netw. 4 (1993)
910–918.
[21] T. Chen, H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynam-
ical systems, IEEE Trans. Neural Netw. 6 (1995) 911–917.
[22] L. Lu, P. Jin, G. Pang, Z. Zhang, G.E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators,
Nat. Mach. Intell. 3 (2021) 218–229.
[23] A. Mani, K.M. Wang, Electroconvection near electrochemical interfaces: experiments, modeling, and computation, Annu. Rev. Fluid Mech. 52 (2020)
509–529.
[24] I. Rubinstein, B. Zaltzman, Electro-osmotically induced convection at a permselective membrane, Phys. Rev. E 62 (2000) 2238.
[25] B. Zaltzman, I. Rubinstein, Electro-osmotic slip and electroconvective instability, J. Fluid Mech. 579 (2007) 173–226.
[26] S. Rubinstein, G. Manukyan, A. Staicu, I. Rubinstein, B. Zaltzman, Direct observation of a nonequilibrium electro-osmotic instability, Phys. Rev. Lett. 101
(2008) 236101.
[27] C.L. Druzgalski, M.B. Andersen, A. Mani, Direct numerical simulation of electroconvective instability and hydrodynamic chaos near an ion-selective
surface, Phys. Fluids 25 (2013) 110804.
[28] E. Karatay, C.L. Druzgalski, A. Mani, Simulation of chaotic electrokinetic transport: performance of commercial software versus custom-built direct
numerical simulation codes, J. Colloid Interface Sci. 446 (2015) 67–76.
[29] E. Karatay, M.B. Andersen, M. Wessling, A. Mani, Coupling between buoyancy forces and electroconvective instability near ion-selective surfaces, Phys.
Rev. Lett. 116 (2016) 194501.
[30] C.L. Druzgalski, A. Mani, Statistical analysis of electroconvection near an ion-selective membrane in the highly chaotic regime, Phys. Rev. Fluids 1 (2016)
073601.
[31] S.V. Pham, H. Kwon, B. Kim, J.K. White, G. Lim, J. Han, Helical vortex formation in three-dimensional electrochemical systems with ion-selective
membranes, Phys. Rev. E 93 (2016) 033114.
[32] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty
quantification without labeled data, J. Comput. Phys. 394 (2019) 56–81.
[33] S. Bhatnagar, Y. Afshar, S. Pan, K. Duraisamy, S. Kaushik, Prediction of aerodynamic flow fields using convolutional neural networks, Comput. Mech. 64
(2019) 525–545.
[34] S. Cai, S. Zhou, C. Xu, Q. Gao, Dense motion estimation of particle images via a convolutional neural network, Exp. Fluids 60 (2019) 73.
[35] J. Kim, C. Lee, Prediction of turbulent heat transfer using convolutional neural networks, J. Fluid Mech. 882 (2020).
[36] Q. Wang, Y. Hasegawa, T.A. Zaki, Spatial reconstruction of steady scalar sources from remote measurements in turbulent flow, J. Fluid Mech. 870 (2019)
316–352.
[37] M. Wang, Q. Wang, T.A. Zaki, Discrete adjoint of fractional-step incompressible Navier-Stokes solver in curvilinear coordinates and application to data
assimilation, J. Comput. Phys. 396 (2019) 427–450.
[38] M. Wang, T.A. Zaki, State estimation in turbulent channel flow from limited observations, arXiv preprint arXiv:2011.03711, 2020.
[39] S. Cai, E. Mémin, Y. Yang, C. Xu, Sea surface flow estimation via ensemble-based variational data assimilation, in: 2018 Annual American Control
Conference (ACC), 2018, pp. 3496–3501.
[40] V. Mons, Q. Wang, T.A. Zaki, Kriging-enhanced ensemble variational data assimilation for scalar-source identification in turbulent environments, J.
Comput. Phys. 398 (2019) 108856.
[41] D.A. Buchta, T.A. Zaki, Observation-infused simulations of high-speed boundary layer transition, arXiv preprint arXiv:2011.08265, 2020.
[42] G.E. Karniadakis, S. Sherwin, Spectral/hp Element Methods for Computational Fluid Dynamics, Oxford University Press, 2005.
[43] L. Lu, X. Meng, Z. Mao, G.E. Karniadakis, DeepXDE: a deep learning library for solving differential equations, SIAM Rev. 63 (2021) 208–228.
[44] C. Lin, Z. Li, L. Lu, S. Cai, M. Maxey, G.E. Karniadakis, Operator learning for predicting multiscale bubble growth dynamics, J. Chem. Phys. 154 (2021)
104118.