
Journal of Computational Physics 447 (2021) 110698

Contents lists available at ScienceDirect

Journal of Computational Physics


www.elsevier.com/locate/jcp

DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators

Zhiping Mao a,1, Lu Lu b, Olaf Marxen c, Tamer A. Zaki d,∗, George Em Karniadakis e,∗

a School of Mathematical Sciences, Xiamen University, Xiamen, 361005, China
b Department of Chemical and Biomolecular Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
c Department of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
d Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
e Division of Applied Mathematics, Brown University, Providence, RI 02912, USA

Article history: Available online 13 September 2021

Keywords: Deep learning; Operator approximation; DeepONet; Hypersonics; Chemically reacting flow; Data assimilation

Abstract

In high-speed flow past a normal shock, the fluid temperature rises rapidly, triggering downstream chemical dissociation reactions. The chemical changes lead to appreciable changes in fluid properties, and these coupled multiphysics and the resulting multiscale dynamics are challenging to resolve numerically. Using conventional computational fluid dynamics (CFD) requires excessive computing cost. Here, we propose a new, efficient approach, assuming that some sparse measurements of the state variables are available that can be seamlessly integrated in the simulation algorithm. We employ a special neural network for approximating nonlinear operators, the DeepONet [23], which is used to predict separately each individual field, given inputs from the rest of the fields of the coupled multiphysics system. We demonstrate the effectiveness of DeepONet for a benchmark hypersonic flow involving seven field variables. Specifically, we predict five species in the non-equilibrium chemistry downstream of a normal shock at high Mach numbers as well as the velocity and temperature fields. We show that upon training, DeepONets can be over five orders of magnitude faster than the CFD solver employed to generate the training data and yield good accuracy for unseen Mach numbers within the range of training. Outside this range, DeepONet can still predict accurately and fast if a few sparse measurements are available. We then propose a composite supervised neural network, DeepM&Mnet, that uses multiple pre-trained DeepONets as building blocks and scattered measurements to infer the set of all seven fields in the entire domain of interest. Two DeepM&Mnet architectures are tested, and we demonstrate the accuracy and capacity for efficient data assimilation. DeepM&Mnet is simple and general: it can be employed to construct complex multiphysics and multiscale models and assimilate sparse measurements using pre-trained DeepONets in a “plug-and-play” mode.

© 2021 Elsevier Inc. All rights reserved.

* Corresponding author.
E-mail addresses: [email protected] (T.A. Zaki), [email protected] (G.E. Karniadakis).
1 This work started while Z. Mao was a postdoc at Brown University.

https://doi.org/10.1016/j.jcp.2021.110698
0021-9991/© 2021 Elsevier Inc. All rights reserved.
Z. Mao, L. Lu, O. Marxen et al. Journal of Computational Physics 447 (2021) 110698

1. Introduction

1.1. Motivation

Simulating the high-speed flow field of a chemically reacting fluid is interesting, challenging, and has important applications, including hypersonic cruise flight and planetary re-entry [1]. In order to predict such a flow field, a multi-physics and multi-scale approach is essential. The fluid-dynamics part of this problem alone may involve a large number of effects as well as length and time scales: shocks, boundary layers, and transition to turbulence, to name but a few [49,4,13,7]. In addition to fluid dynamics, physical chemistry must also be taken into account, since chemical reactions are likely to occur in the flow and need to be modeled accurately [37,2,44]. The problem is particularly challenging if reactions and flow effects take place on similar spatio-temporal scales, because a simplified model for this type of flow field cannot then be easily derived. Instead, the full set of equations describing both the physical and the chemical model must be solved.
One particularly relevant canonical problem is the high-speed flow downstream of a normal shock [28]. In this case,
the temperature of the fluid rises rapidly across the shock, which in turn triggers chemical dissociation reactions. As a
result, the flow field changes composition, which influences the energy balance as dissociation reactions are endothermic.
A change of composition directly influences several other aspects because it leads to a modification of viscosity as well as
heat conduction.
The level of fidelity required to model flows with high-temperature gas effects depends on the typical flow speed and
temperature [1,37,39]. The rate of chemical reactions is largely influenced by temperature (and to a lesser extent by pressure), and convective mass transport is driven by flow speed. Comparing the typical time scales of these two effects yields
three different regimes. In the first, chemical reaction rates are much smaller than the rate of convective transport, and the
fluid composition in this regime is considered frozen. If chemical reaction rates are much larger than the rate of convective
mass transport, the flow is in chemical equilibrium; reaction rates are therefore assumed infinite and the gas composition
depends on local properties such as temperature and pressure (or density). The most interesting regime, which is considered
herein, is referred to as finite-rate chemistry, or non-equilibrium chemistry; it lies in between the other two regimes, when
the rates of reactions and transport are commensurate.
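The time-scale comparison above is commonly condensed into a single Damköhler-like ratio. A toy classifier of the three regimes (the function name and the cutoff thresholds are illustrative assumptions, not from the paper):

```python
def chemistry_regime(t_flow: float, t_chem: float,
                     lo: float = 1e-2, hi: float = 1e2) -> str:
    """Classify the reacting-flow regime by comparing time scales.

    da >> 1: reactions much faster than transport -> chemical equilibrium.
    da << 1: reactions much slower than transport -> frozen composition.
    Otherwise: finite-rate (non-equilibrium) chemistry.
    The thresholds lo/hi are illustrative, not taken from the paper.
    """
    da = t_flow / t_chem  # Damkohler-like ratio of time scales
    if da < lo:
        return "frozen"
    if da > hi:
        return "equilibrium"
    return "finite-rate"
```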
Gas composition strongly affects the relation between temperature and internal energy. For a calorically perfect gas,
internal energy is proportional to temperature. A gas composed of a single atomic species typically behaves calorically
perfect, for which the specific heat is constant. A gas composed of a molecular species will experience the excitation
of vibrational and electronic modes of the molecules. As a result, the internal energy becomes a non-linear function of
temperature and the specific heat will also vary with temperature. Such a gas is denoted as thermally perfect. Vibrational
and electronic excitation may not happen infinitely fast, in which case the process of thermal relaxation may need to be
taken into account. However, here we will only consider a gas in so-called local thermal equilibrium, i.e., it will be assumed
that the vibrational excitation only depends on local properties such as the temperature and not on its (time) history.
In many practical applications, the fluid is a mixture of atomic and diatomic species, such as in air. While air can certainly
be modeled as a calorically perfect gas at low temperatures where vibrational and electronic modes are not excited, at
high temperatures a thermally perfect gas model would be more appropriate. At even higher temperatures, the threshold
for the onset of chemical reactions may be reached and the corresponding changes of gas composition may need to be
included in the modeling approach. Such a high temperature may occur as a result of a shock wave, which creates an
almost instantaneous increase of temperature [21,19]. The time scale for a flow passing through a shock is extremely short,
owing to the very small thickness of a shock, which is on the order of the mean free path lengths of the species involved.
Chemical reactions are much slower than this fast time scale, and hence the composition of the gas mixture does not change
across the shock: the gas mixture is frozen. However, downstream of the shock, chemical reactions may set in as a result of
the higher temperature, and as the flow speed is reduced significantly, a region of flow in the finite-rate reaction regime is
expected, before the flow may reach an equilibrium state far downstream of the shock. As a result of finite-rate reactions,
the gas mixture will increasingly change its composition as molecules begin to dissociate, rendering the region immediately
downstream of the shock particularly interesting.
Compared to a calorically perfect gas, a gas that undergoes changes in composition is appreciably more complex to model
because transport equations must be solved for each species density, with a source term for the creation and destruction of
species by chemical reactions. These additional equations complicate the numerical treatment significantly. In particular, the
source term can cause numerical stiffness and hence become difficult to integrate. Several numerical methods for finite-rate
chemistry exist, mostly for time-dependent flows involving combustion [15,34,10,33,35], but they often rely on a low-Mach-
number formulation and are therefore not applicable to hypersonics. At high Mach number, evidence abounds regarding
the sensitivity of the flow to small distortions [40,16], and hence accurately capturing non-equilibrium chemistry becomes
extremely important. A growing number of methods now exist that account for non-equilibrium chemistry while solving
the compressible Navier-Stokes or Euler equations at hypersonic speeds [52,18,30,12,41,8,50,51]. However, the combination
of high Mach numbers and high temperatures not only renders these simulations challenging, but also taxes computational
resources heavily.


Fig. 1. Left: Schematic of the non-equilibrium chemistry that takes place downstream of a high-Mach-number shock around a bluff body. The simulations are performed in the frame of the body. Right: The computational domain for the generation of training data extends from x = 0 immediately downstream of the shock to x = 0.02, discretized using a uniform grid with N_X = 160 points. The cross-flow direction y ∈ [0, 0.05] is discretized using M_Y = 21 uniformly distributed points. A Dirichlet condition is prescribed at the inlet, and periodicity is imposed in the y direction.

1.2. Deep neural networks

In realistic hypersonic applications, flight data may comprise limited measurements, for example of temperature or velocity, and perhaps, in special cases, even of the composition of the species. To integrate such data with the computational approach, and in order to simulate the aforementioned multiscale & multiphysics problems efficiently, we abandon the classical numerical methods and explore in the present work a new approach, namely, a deep neural network (DNN) based approximation of all nonlinear operators.
The machine learning community has made tremendous strides in the past 15 years by capitalizing on the neural network (NN) universal function approximation [9], and by building a plethora of innovative networks with good generalization properties for diverse applications. However, it has ignored an even more powerful theoretical result by Chen & Chen [5,6], which states that neural networks can, in fact, approximate functionals and even nonlinear operators with arbitrarily good accuracy. This is an important result with significant implications, especially for the modeling and simulation of physical systems, which require accurate regression and not only the approximate classification typical of commercial applications. Preliminary results in [11,23] have provided a glimpse of the potential breakthroughs in modeling complex engineering problems by encoding different explicit and implicit operators using DNNs. For example, in [11] Ferrandis et al. represented a functional predicting the dynamic motions of a destroyer battleship in extreme sea states, making predictions in a fraction of a second in contrast to one week per simulation using the OpenFOAM CFD solver. Similarly, in [23], Lu et al. developed the Deep Operator Network (DeepONet) to approximate integrals, ODEs, PDEs, and even fractional Laplacians by designing a new trunk-branch NN that approximates linear and nonlinear operators and generalizes well to unseen functions.
Traditional methods, especially high-order discretizations such as WENO [22] and spectral elements [17], can produce
very accurate solutions of multiphysics and multiscale (M&M) problems but they do not scale well in high dimensions
and large domains. Moreover, they cannot be easily combined with data [46,47,31,3] and are prohibitively expensive for
inverse problems. Real-world M&M problems are typically ill-posed with missing initial or boundary conditions and often
only partially known physics, e.g., reactive transport as in the present work. Physics-Informed Neural Networks (PINNs) can
tackle such problems given some extra (small) data anywhere in the domain, see [42,43,27,24]. PINNs are easy to implement
for multiphysics problems and particularly effective for inverse problems [36] but not as efficient or accurate for forward
multiscale problems. Here, we propose DeepONets to approximate functionals and nonlinear operators as building blocks
of a more general M&M framework that can be used to approximate different nonlinear operators for modeling M&M
problems. Unlike PINNs, we can train DeepONets offline and make predictions for new input functions online very fast. We
refer to this integrated framework that will use both data and DeepONets as DeepM&Mnet, and, in principle, it can be used
for any complex M&M problem in physics and engineering. Here we consider hypersonic flow downstream of a normal
shock, which involves the interaction of seven field variables (see Fig. 1). This formidable M&M challenge is an excellent
testbed to develop the DeepM&Mnet framework and to demonstrate its effectiveness.
The aim of this work is to develop a deep-learning framework using neural-network approximation for solving the cou-
pled flow and finite-rate chemistry in hypersonics flows. We first build connections between the flow and the chemical
species, namely, we build functionals approximated by neural networks between the flow and chemical species (i.e., taking
the flow as the functional of chemical species or taking chemical species as the functional of the flow) with DeepOnets,
which will serve as building blocks for the DeepM&Mnet. We then build parallel or series DeepM&Mnets, which take the
space variable x as input and field variables as outputs, and train these networks by using the predictions (which are re-
quired at each step of the training) of the pre-trained DeepOnets between the flow and chemical species. In a DeepM&Mnet,
we first train several DeepONets independently as the subcomponents, and then train one extra network, which shares a
similar idea of transfer learning [45]. In particular, in the present work, we claim the following contributions:

• We start with developing DeepONets for the M&M model, namely, the non-equilibrium chemistry that takes place
behind a normal shock at Mach numbers between 8 and 10. We infer the interactions of the flow and five chemical


species whose densities span 8 orders of magnitude downstream of the shock. Collectively, these dynamics establish the
gas composition and flow velocity, which are governed by the nonlinear Navier-Stokes equations and whose operators
will be learned by our DeepONets. Performing inference on a trained DeepONet allows evaluation of the solution in
100,000× less time than a traditional CFD solver.
• Besides predictions for Mach numbers within the range [8, 10], we also test inputs outside this range, i.e., outside the input space (extrapolation). In that case, there are relatively large deviations between the reference data and the predictions obtained by directly using the DeepONets. However, we significantly improve the predictions and obtain good results by developing a supervised NN, which can be trained efficiently, that combines the pre-trained DeepONets with a few measurements, such as may be available from sensors.
• As a preliminary step in developing the multi-physics integrated framework, we employ these pre-trained DeepONets as
building blocks to form different types of DeepM&Mnets. We first develop a parallel DeepM&Mnet architecture which,
similar to the aforementioned extrapolation algorithm, requires sensor data for all the variables. However, in practice,
for data assimilation we may not have access to information regarding the species densities, and may only have sparse
data for the flow. Therefore, we develop a series DeepM&Mnet architecture that assimilates only a few data for the
flow and predicts the entire state. Moreover, we examine the influence of the global mass conservation constraint and
demonstrate that not only does it stabilize the training process but it also improves prediction accuracy.
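The global mass-conservation constraint mentioned in the last bullet can be imposed as a soft penalty on the predicted species mass fractions. A minimal sketch (the penalty form, function name, and weight are illustrative assumptions; the paper's exact loss may differ):

```python
import numpy as np

def loss_with_mass_constraint(pred_frac, data_pred, data_true, lam=1.0):
    """Data-mismatch loss plus a soft global mass-conservation penalty.

    pred_frac: (n_points, n_species) predicted species mass fractions,
    data_pred/data_true: predictions and measurements at sensor points,
    lam: penalty weight (illustrative choice, not the paper's value).
    """
    data_mse = np.mean((data_pred - data_true) ** 2)
    # Mass fractions must sum to one at every point in the domain.
    mass_residual = np.sum(pred_frac, axis=1) - 1.0
    return data_mse + lam * np.mean(mass_residual ** 2)
```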

The rest of the paper is organized as follows: In the next section, we present the M&M fluid-mechanical model and demonstrate how to generate the training data using a finite-difference approach. We then develop in section 3 the DeepONets, which will serve as building blocks for the DeepM&Mnets discussed in section 4. We conclude with a summary in section 5. In the Appendix, we present an alternative series-type DeepM&Mnet.

2. Problem setup and data generation

In this section, we present in detail the governing equations that model the flow and describe how to obtain the data for training and testing. The mathematical formulation for fluid motion is given in § 2.1, followed by a description of the data generation process, including details of the test case, in § 2.2.

2.1. Fluid-mechanical model and numerical method

The fluid-mechanical model used here comprises the Navier-Stokes equations for a compressible fluid, and these equations are advanced in time until a steady state is reached. A detailed description of the model is given in [29]. Here, only key elements of this model are described. The governing equations are the principles of mass conservation, momentum balance, and energy conservation, and are formulated for a 5-species mixture of chemically reacting gases in two spatial
dimensions ($j = 1, 2$):

$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_j}\left(\rho u_j\right) = 0, \tag{2.1}$$

$$\frac{\partial \rho_s}{\partial t} + \frac{\partial}{\partial x_j}\left(\rho_s u_j\right) = \dot{w}_s, \quad s = 1 \ldots 5, \tag{2.2}$$

$$\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial}{\partial x_j}\left(\rho u_i u_j + p\,\delta_{ij}\right) = \frac{\partial \sigma_{ij}}{\partial x_j}, \quad i = 1, 2, \tag{2.3}$$

$$\frac{\partial E}{\partial t} + \frac{\partial}{\partial x_j}\left[(E + p)\, u_j\right] = -\frac{\partial q_j}{\partial x_j} + \frac{\partial}{\partial x_k}\left(u_j \sigma_{jk}\right). \tag{2.4}$$
A schematic of the computational domain is shown in Fig. 1. While the domain is two-dimensional, the flow of interest is one-dimensional and only depends on the distance downstream of the shock. The equations are non-dimensionalized using reference quantities described below. The mixture density is $\rho$, $\rho_s$ is the species density for species $s = 1 \ldots 5$, and $u_1$, $u_2$ are the velocity components in the streamwise ($x = x_1$) and normal ($y = x_2$) directions. In eqn. (2.2), $\dot{w}_s$ is a source term due to finite-rate reactions that lead to the production or consumption of species. It is obtained from the MUTATION library, here used in its version 2.1 [25,26,48], which has been coupled to the Navier-Stokes solver via an interface code layer.
Cases considered here are based on air with the species N, O, N2, NO and O2. The state of the gas mixture is described by the pressure $p$ and temperature $T$, which are related via the following equation of state, assuming that all species individually behave as ideal gases so that the partial pressures sum to the total pressure of the mixture:

$$\tilde{p} = \sum_{s} \tilde{\rho}_s \tilde{R}_s \tilde{T}. \tag{2.5}$$

In the equation, a tilde ($\tilde{\cdot}$) denotes dimensional quantities, and $\tilde{R}_s = \tilde{R}/\tilde{M}_s$, where $\tilde{R}$ is the universal gas constant and $\tilde{M}_s$ is the molar mass of species $s$. The temperature $T$ is required to calculate the heat flux vector $q_j$, which reads:

4
Z. Mao, L. Lu, O. Marxen et al. Journal of Computational Physics 447 (2021) 110698

$$q_j = -\frac{1}{Re_\infty Pr_\infty Ec_\infty}\, k\, \frac{\partial T}{\partial x_j}. \tag{2.6}$$
The temperature is not among the quantities governed by the transport equations (2.1) to (2.4) above, but it is linked to the internal energy $e$, which, together with the kinetic energy, contributes to the total energy $E$ in the following way:

$$E = \frac{1}{Ec_\infty}\, e\, \rho + \frac{1}{2}\, \rho\, u_i u_i. \tag{2.7}$$
Computation of the temperature $T$ is based on knowledge of the internal energy $e$ and the species densities $\rho_s$. Specifically, the gas consists of a mixture of perfect gases (atoms and linear molecules), namely dioxygen, dinitrogen, atomic oxygen, atomic nitrogen, and nitric oxide, in local thermodynamic equilibrium. The internal energy of the gas $e$ is then the summation of the internal energies of all species $e_s$ weighted by their mass fractions. The internal energy consists of the formation enthalpy, translational energy, and, for molecules, also vibrational and rotational energy. Except for the formation enthalpy, all these contributions depend on temperature, which hence allows calculation of the temperature from the (known) internal energy. The equation of state is assumed to hold for each species individually, and the resulting pressure can hence be obtained by summing the partial pressures of all species (2.5). The computation of the source term for each species is based on the law of mass action. An Arrhenius law is used to obtain the forward reaction rate in this law. Calculation of the backward reaction rates requires the equilibrium constant, which is obtained with the help of the Gibbs free energy, which in turn requires several (empirical) constants obtained from the literature. We note that Appendix A provides a more detailed description of the physico-chemical model in order to complete the description of the data-generation model. The corresponding iterative solution procedure used to obtain the temperature is performed within the MUTATION library.
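The pressure summation (2.5) and the iterative temperature solve described above can be sketched as follows, assuming constant per-species specific heats and formation enthalpies (a simplification: the actual model has temperature-dependent contributions and is handled inside the MUTATION library; the molar masses are standard values):

```python
R_UNIVERSAL = 8.314462618  # universal gas constant [J mol^-1 K^-1]

# Standard molar masses [kg/mol] for the 5-species air model.
MOLAR_MASS = {"N2": 28.0134e-3, "O2": 31.9988e-3,
              "N": 14.0067e-3, "O": 15.9994e-3, "NO": 30.0061e-3}

def mixture_pressure(rho_s, T):
    """Eq. (2.5): total pressure as the sum of ideal-gas partial
    pressures. rho_s: dict of species densities [kg/m^3], T in [K]."""
    return sum(rho * (R_UNIVERSAL / MOLAR_MASS[s]) * T
               for s, rho in rho_s.items())

def temperature_from_energy(e_mix, mass_frac, cv, h_form,
                            t_lo=50.0, t_hi=20000.0, tol=1e-8):
    """Invert e(T) = sum_s y_s (h_form_s + cv_s T) for T by bisection,
    mimicking the iterative solve inside the MUTATION library.
    Constant cv and h_form per species are a simplifying assumption;
    the real model has temperature-dependent specific heats."""
    def e_of_t(T):
        return sum(y * (h_form[s] + cv[s] * T)
                   for s, y in mass_frac.items())
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if e_of_t(t_mid) < e_mix:   # e(T) is monotonically increasing
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```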
The right-hand side of both the momentum equation (2.3) and the energy equation (2.4) contains the viscous stress
tensor σi j , which is given by:
 
$$\sigma_{ij} = \frac{\mu}{Re_\infty}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\,\delta_{ij}\,\frac{\partial u_k}{\partial x_k}\right). \tag{2.8}$$
Unlike in a calorically perfect gas, transport properties such as the viscosity $\mu$ and the thermal conductivity $k$ are not simple functions of the (local) temperature, but also depend on the gas composition. These quantities are also computed using the MUTATION library. Due to the nature of the flow field considered here, these transport properties do not play a major role. Specifically, while the transport coefficients change downstream of the shock and are accurately computed in the present simulations, their variation is of lesser interest than the non-equilibrium chemical reactions that are our primary focus. Therefore, no further description of how $\mu$ and $k$ are computed is given here, since the details are part of the MUTATION library.
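For concreteness, the stress tensor of eq. (2.8) can be evaluated from a velocity-gradient tensor with a few lines of NumPy (function and argument names are illustrative):

```python
import numpy as np

def viscous_stress(grad_u, mu, re):
    """Viscous stress tensor of eq. (2.8).

    grad_u[i, j] = du_i/dx_j (velocity-gradient tensor),
    mu: non-dimensional viscosity, re: Reynolds number.
    """
    div_u = np.trace(grad_u)           # du_k/dx_k
    eye = np.eye(grad_u.shape[0])
    return (mu / re) * (grad_u + grad_u.T - (2.0 / 3.0) * div_u * eye)
```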
Non-dimensionalization is based on inlet conditions marked by $\infty$, resulting in the following non-dimensional quantities. The Reynolds number is defined as $Re_\infty = \tilde{\rho}_\infty \tilde{a}_\infty \tilde{L}_{ref} / \tilde{\mu}_\infty$, where $\tilde{L}_{ref}$ is an arbitrary length scale and all other dimensional quantities represent the pre-shock conditions; for example, $\tilde{a}_\infty$ is the upstream speed of sound. The Prandtl number is $Pr_\infty = \tilde{\mu}_\infty \tilde{c}_{p,\infty} / \tilde{k}_\infty$. The Mach number is defined as $Ma_\infty = \tilde{U}_\infty / \tilde{c}_\infty$, where $\tilde{c}_\infty$ corresponds to the speed of sound. The reference temperature used for non-dimensionalization is $\tilde{T}_{ref} = (\gamma_\infty - 1)\,\tilde{T}_\infty$, where $\gamma_\infty = \tilde{c}_{p,\infty}/\tilde{c}_{v,\infty}$ is the specific heat ratio and $\tilde{c}_{p,\infty}$, $\tilde{c}_{v,\infty}$ are the specific heats at constant pressure and volume, respectively. The Eckert number $Ec_\infty$ in eqns. (2.6) and (2.7) assumes a value of 1, as the pre-shock gas is calorically perfect.
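The non-dimensional groups above can be collected in a small helper (argument names are illustrative; all inputs are dimensional pre-shock values):

```python
def reference_numbers(rho_inf, a_inf, L_ref, mu_inf,
                      cp_inf, k_inf, U_inf, c_inf):
    """Non-dimensional groups of section 2.1, computed from
    dimensional pre-shock quantities (argument names are illustrative)."""
    Re = rho_inf * a_inf * L_ref / mu_inf   # Reynolds number
    Pr = mu_inf * cp_inf / k_inf            # Prandtl number
    Ma = U_inf / c_inf                      # Mach number
    return Re, Pr, Ma
```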
The governing equations are solved in the two-dimensional computational domain shown in Fig. 1. Since the flow only depends on the streamwise coordinate, training data are only acquired along this dimension, and all predictions by the trained algorithms will also be one-dimensional. The integration domain extends from x = 0 immediately downstream of the shock to x = 0.02, discretized using N_X = 160 equi-spaced grid points in the streamwise direction. In the y direction, M_Y = 21 grid points were used within y ∈ [0, 0.05]. At the inflow, a Dirichlet condition is prescribed, and the corresponding values are described in the next section. Periodic boundary conditions are imposed in the y direction. Spatial derivatives in the transport equations (2.1) to (2.4) are discretized using high-order compact finite differences, and the solution is advanced in time using a third-order Runge-Kutta method. The discretization is largely identical to that used in Ref. [32], on which the present code is based. However, the discretization at the boundaries of the integration domain has been altered to accommodate the boundary conditions applied here.
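A minimal sketch of the grid and a time-integration step: the grid follows the stated N_X, M_Y, and domain extents (endpoint inclusion is an assumption), and since the excerpt only states "third-order Runge-Kutta", the common SSP variant shown here is an illustrative choice, not confirmed by the text.

```python
import numpy as np

# Computational grid of section 2.1 (endpoint inclusion is assumed).
x = np.linspace(0.0, 0.02, 160)   # N_X = 160 streamwise points
y = np.linspace(0.0, 0.05, 21)    # M_Y = 21 cross-flow points

def rk3_step(f, u, t, dt):
    """One step of the third-order strong-stability-preserving
    Runge-Kutta scheme (Shu-Osher form) for du/dt = f(t, u).
    The particular RK3 variant is an illustrative assumption."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))
```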

2.2. Data generation

We introduce in this subsection how to generate the training and testing datasets used to develop the DeepONets and
DeepM&Mnets. Only a brief description of the numerical method used for simulations is given here. Details of the method
used are given in [29]. We consider “case S” in reference [29], i.e., the flow field downstream of a normal shock wave,
which is set at the origin of the coordinate system. The parameters used are given in Tables 1-3. Specifically, the conditions
upstream of the shock are given in Table 1. The gas composition at the inflow boundary is given in Table 2. It is assumed that
within the short streamwise length of the shock, the composition χ s = ρ s /ρ does not change, and hence the composition


Table 1
General parameters upstream of the shock.

Re∞ = 10^4,  Pr∞ = 0.69,  γ∞ = 1.397,  M∞ ∈ [8, 10],  T̃∞ = 350 K,  ρ̃∞ = 0.3565 × 10^-3 kg m^-3

Table 2
Gas composition χ_s at the inflow boundary.

χ_N2 = 0.767082,  χ_O2 = 0.232918,  χ_N = 0,  χ_O = 0,  χ_NO = 0

Table 3
Post-shock conditions (x = 0).

T̃ = 5918.87 K,  ρ̃ = 0.255537 × 10^-2 kg m^-3

upstream and downstream remains the same. However, both temperature and density increase significantly, and the post-shock conditions are used as inflow conditions for the simulations. The initial temperature and total density are presented in Table 3; these have been calculated using the Rankine–Hugoniot relations. The flow field is advanced in time until a steady state is reached. The same case has been previously considered in [26]. Even though variations only take place along the streamwise direction, the actual computations were performed in two dimensions, as stated earlier.
We generated 400 trajectories for M∞ ∈ [8, 10] and randomly selected 240 trajectories and 60 trajectories for training and testing, respectively (see Fig. 2). We show the function spaces for the five chemical species, the velocity, and the temperature in Fig. 2. The range of the function space of ρ_NO spans 8 orders of magnitude across a thin boundary layer, which highlights the challenge of resolving this flow.
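The 240/60 train/test split of the 400 trajectories can be sketched as follows (the seed is arbitrary; the remaining 100 trajectories are simply left unused):

```python
import numpy as np

def split_trajectories(n_total=400, n_train=240, n_test=60, seed=0):
    """Randomly partition trajectory indices into disjoint train/test
    sets, as in section 2.2 (240 + 60 out of 400 trajectories).
    The seed is an arbitrary choice for reproducibility."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_total)
    return idx[:n_train], idx[n_train:n_train + n_test]
```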

3. Developing DeepONets as building blocks

The main idea of the proposed framework is to map an input in the form of a function to an output in the form of another function. This is accomplished by using the DeepONet, which performs operator regression, in contrast to a standard neural network, which regresses a function. In this section, we present the DeepONet architecture, which will serve as the building block for the DeepM&Mnet. We develop DeepONets for different fields to express the coupled dynamics between the flow and the chemical species. With input from either the flow or the chemical species, the DeepONet can predict all the remaining fields. These DeepONets will subsequently be used as building blocks for constructing the DeepM&Mnets. Note that in Appendix B, we provide another example of three simple DeepONets for the chemical species alone, where each uses ρ_N2 and ρ_O2 as inputs and predicts either ρ_N, ρ_O, or ρ_NO.

3.1. DeepONet architecture

A neural network can approximate a continuous function using a nonlinear basis that is computed on-the-fly based on different activation functions in the form of sigmoids, tanh, or other non-polynomial activation functions [9]. A less known result is that a neural network can also approximate nonlinear continuous operators [6]. The first applications of this universal approximation theorem of neural networks to learning nonlinear operators from data were presented by Lu et al., who proposed a specific DeepONet architecture that can lead to accurate predictions for unseen data, i.e., beyond the training data; see [23].
A DeepONet consists of two sub-networks, a branch net and a trunk net. The branch net encodes the input function at a fixed number of points x_i, i = 1, ..., m, while the trunk net encodes the locations of the output functions. For all the DeepONets presented herein, the number of points for the branch net is 75, while the number of points for the trunk net is 48. In [23], two different DeepONet architectures were proposed, namely, stacked and unstacked DeepONets; unstacked DeepONets seem to have better performance in terms of the generalization error, which is measured by the difference in accuracy between predictions on the training data and on new testing data. Hence, in this work,
we will consider the unstacked DeepONet (Fig. 3). A DeepONet learns an operator $G : u \to G(u)$, where the branch net takes $[u(x_1), u(x_2), \ldots, u(x_m)]$ as the input and outputs $[b_1, b_2, \ldots, b_p]$, and the trunk net takes $y$ as the input and outputs $[t_1, t_2, \ldots, t_p]$. The final output is given by

$$G(u)(y) = \sum_{k=1}^{p} b_k t_k.$$
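The branch-trunk combination can be sketched in plain NumPy, with random weights standing in for trained networks (layer widths and p are illustrative choices; m = 75 sensors and 48 query points follow the counts stated above):

```python
import numpy as np

def mlp(sizes, rng):
    """Small tanh MLP with random weights (stand-ins for trained values)."""
    params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:   # no activation on the last layer
                x = np.tanh(x)
        return x
    return forward

def deeponet(u_sensors, y, branch, trunk):
    """Unstacked DeepONet: G(u)(y) = sum_k b_k(u) t_k(y).

    u_sensors: (m,) input function sampled at m sensor points,
    y: (n, d) query locations for the output function.
    """
    b = branch(u_sensors[None, :])   # (1, p) branch coefficients
    t = trunk(y)                     # (n, p) trunk basis values
    return t @ b[0]                  # (n,) predicted output function

rng = np.random.default_rng(0)
p = 16                                   # number of basis terms
branch = mlp([75, 64, p], rng)           # m = 75 sensors, as in the text
trunk = mlp([1, 64, p], rng)             # y is a scalar location here
out = deeponet(rng.standard_normal(75),
               np.linspace(0.0, 1.0, 48)[:, None], branch, trunk)
```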


Fig. 2. Data defining the function spaces for the chemical species, the velocity and the temperature. The range of the function space of ρ_NO spans 8 orders of magnitude.

DeepONets are implemented in DeepXDE [24], a user-friendly Python library designed for scientific machine learning. For details, we refer to [6] for the theoretical result and to [23] for the implementation. A recent paper [20] has provided a theoretical analysis of all the major sources of error in DeepONet and presented a rigorous proof that DeepONet breaks the curse of dimensionality, in the sense that achieving higher approximation accuracy does not require exponentially more training data. From the practical computational standpoint, the important point to realize is that DeepONet is trained offline and can make predictions online without further training, as long as the input functions are drawn from a pre-defined input space. For new inputs outside this space, we show in the present paper how we can still assimilate new data very fast using the pre-trained DeepONet as a transfer-learning method.

3.2. DeepONets for the coupled dynamics between the flow and the chemical species

To demonstrate the simplicity and effectiveness of DeepONet, in Appendix B we develop three DeepONets to predict the densities ρ_N, ρ_O and ρ_NO using the densities ρ_N2 and ρ_O2 as the inputs of the branch nets. Here, however, we focus on the coupled dynamics of the flow and chemical fields. For this purpose, we develop two DeepONets, which will become building blocks for constructing the DeepM&Mnets for the entire configuration. The two DeepONets have the following functionality:

(i) G_{U,T}: [ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO] → [U, T] uses the densities of the chemical species as the input of the branch net to predict the velocity U and temperature T (Fig. 4 left).
(ii) G_{ρN2,O2,N,O,NO}: [U, T] → [ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO] uses the velocity U and the temperature T as the input of the branch net to predict the densities of all five chemical species (Fig. 4 right).

We train all the DeepONets independently. Here, we use the logarithms of the original data for all densities while we
use the original data for the velocity and the temperature. The parameters for the neural network are as follows:


Fig. 3. Schematic of DeepONet, which is the building block of the DeepM&Mnet. DeepONet learns the operator G: (u, y) → G(u)(y) = Σ_{k=1}^{p} b_k t_k. Here, u(x) is the input of the branch net observed at m points (sensors), and b_i, i = 1, 2, ..., p, are the outputs of the branch net; y is the input of the trunk net, and t_i, i = 1, 2, ..., p, are the outputs of the trunk net.

Fig. 4. DeepONets for the coupled dynamics between the flow and the chemical species. (a): Schematic of using the densities of the five chemical species to predict the velocity and temperature, i.e., the DeepONet G_{U,T}: [ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO] → [U, T]. (b): Schematic of using the velocity and temperature to predict the densities of the five chemical species, i.e., the DeepONet G_{ρN2,O2,N,O,NO}: [U, T] → [ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO].

• Hidden layers for both branch and trunk nets: 4 × 100;
• Activation function: adaptive ReLU;
• Learning rate: 6 × 10^-4;
• Epochs: 120000.
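The "adaptive ReLU" above presumably refers to an activation with a trainable slope, in the style of adaptive activation functions; a minimal sketch of one common form, σ(n·a·x) with trainable scalar a and fixed scale n (the values of n and a here are illustrative assumptions):

```python
import numpy as np

def adaptive_relu(x, a, n=10.0):
    """ReLU with a trainable slope: max(0, n*a*x). The scalar a is learned
    jointly with the network weights; n is a fixed scale. The form and the
    values of n and a are illustrative assumptions."""
    return np.maximum(0.0, n * a * x)

x = np.linspace(-1.0, 1.0, 5)
print(adaptive_relu(x, a=0.1))  # with n*a = 1 this reduces to the plain ReLU
```

During training, the gradient of the loss with respect to a adapts the effective slope of the activation.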

We have also performed sensitivity tests with other training parameters. For instance, we used the "tanh" or adaptive "tanh" activation function, and varied the size of the network, the learning rate and the number of epochs used to train the DeepONets; the results, not shown here, are similar to the ones shown below. At present, this procedure is still very empirical, but we anticipate that in the future proper meta-learning techniques can be developed to determine good sets of hyperparameters for specific classes of problems.
We show the comparisons between the predictions of the DeepONet G_{U,T} and the independent reference data, together with the corresponding training and testing losses for U and T/T_∞, in Fig. 5. We point out that these results are randomly chosen from 10 runs corresponding to different initializations. The differences among the 10 runs are small, which shows that the DeepONet solutions are stable in this case. We also show the training losses of the 10 runs with random initializations for the velocity and temperature in the lower two plots of Fig. 5. The agreement with the reference data demonstrates that DeepONet is able to predict the velocity and temperature based on knowledge of the chemical species alone, and the testing errors against independent data are commensurate with the training errors. We also show the comparisons between the predictions of the DeepONet G_{ρN2,O2,N,O,NO} and the independent reference data, as well as the corresponding training and testing losses for ρ_N2, in Fig. 6. Again, we observe that the predictions are in excellent agreement with the independent reference data, which were not part of the training. Note that some of the densities vary by orders of magnitude downstream of the shock, and DeepONet predicts these variations accurately. An important consideration for achieving this level of accuracy is that the data used for all densities are the logarithms of the original data. These results demonstrate that we have successfully trained the DeepONets G_{U,T} and G_{ρN2,O2,N,O,NO}. Once trained, the computational cost of DeepONet predictions on independent data is much smaller than that of CFD simulations. With the successfully trained DeepONets G_{U,T} and G_{ρN2,O2,N,O,NO}, we can subsequently predict the interplay of the flow and the chemical species using DeepM&Mnets (§4) and use them for data assimilation. However, an important pre-requisite for


Fig. 5. Upper: Predictions of the velocity U (left) and the temperature T/T_∞ (right) using the DeepONet G_{U,T}. Middle: The training and testing losses for the velocity U (left) and the temperature T/T_∞ (right). Lower: The training losses for 10 runs with random initializations for the velocity U (left) and temperature T/T_∞ (right). (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)

a robust assimilation framework is the ability to handle new data outside the training range, or to extrapolate. We will
therefore first show the prediction when inputs are outside the function space of the training data for the DeepONets.
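Since the densities span about eight orders of magnitude behind the shock, the density targets are trained in logarithmic form, as noted above. A minimal sketch of this preprocessing and its inverse (base 10 is an assumption here; the paper does not state the base):

```python
import numpy as np

# Illustrative density values spanning many orders of magnitude.
rho = np.array([1.0e-9, 1.0e-6, 1.0e-3, 1.0e-1])

z = np.log10(rho)        # transformed targets used for network training
rho_back = 10.0 ** z     # inverse transform applied to network predictions

print(z)  # approximately [-9, -6, -3, -1]
```

Fitting z instead of rho keeps all targets at comparable scales, so the mean square loss is not dominated by the largest densities.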

3.3. Extrapolation: predicting inputs outside the function space of the training data

The DeepONet predictions shown previously are results for Mach numbers in the input training range, i.e., the testing was performed with independent data but with Mach numbers lying in the interval [8, 10]; this is interpolation. Now, we would also like to obtain predictions with the trained DeepONets for Mach numbers that do not belong to the input function space, namely extrapolation. In particular, we want to use the pre-trained DeepONet G_{U,T} to predict the velocity and temperature, as well as G_{ρN2,O2,N,O,NO} to predict the densities of the chemical species, for M_∞ ∉ [8, 10]. For brevity, we only consider the DeepONet G_{ρN2,O2,N,O,NO}; the results can be interpreted as representative of the performance of the method.
We show the DeepONet predictions for ρ_N and ρ_O2 with different values of the Mach number and different numbers of training trajectories in Fig. 7. We observe that the predictions do not match the reference data. More precisely, the predictions have a shift from the reference solutions. However, the shapes of the predicted solutions are very similar to the


Fig. 6. We show in the first five plots the predictions of the densities of all five chemical species, i.e., ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO, using the DeepONet G_{ρN2,O2,N,O,NO}. The training and testing losses for the density ρ_N2 are shown in the last plot. The training and testing losses are similar for the other densities.

Fig. 7. Extrapolation of the chemical densities for different values of the Mach number and different numbers (240 and 380) of training trajectories. The results obtained with a larger number of training trajectories are closer to the reference solutions than those obtained with a smaller number of trajectories.

reference solutions. Moreover, we observe that the results obtained by using a larger number of training trajectories are
closer to the reference solutions, which implies that the more accurately we represent the input space, the more accurate
the extrapolation is. Nonetheless, it is desirable to extrapolate without requiring substantial additional training and to be
able to do so robustly ‘on the fly’.


Fig. 8. Schematic of the architecture for the extrapolation with data for ρ_N. The output of the DeepONet G_{ρN}, i.e., ρ*_N, is used as the input of the neural network "NN".

Fig. 9. Extrapolations of ρ_N for Mach = 7 and Mach = 11 without and with data. For each Mach number, the number of data points is increased from 3 to 4 and then 5. We obtain good results with just a few data points for the extrapolation. The red dot-dash lines are the outputs of the DeepONet G_{ρN} without data, while the black dash lines are the predictions with data.

In the spirit of data assimilation, which we will demonstrate using DeepM&Mnets (§4), here too we show how extrapolation in the context of DeepONet can be improved when a few data are available for the output variables, for example from sensors. We propose a simple neural network that takes the DeepONet prediction as its input, and define its loss function to be the mean square error between the scarce available data and its output. We show a schematic of the proposed neural network for ρ_N in Fig. 8, where we use the output of the DeepONet G_{ρN}, i.e., ρ*_N, as the input of the neural network to be trained. The loss function is given by

L = (1/n_D) Σ_{i=1}^{n_D} (ρ_N(x_i) − ρ_N^{i,data})²,

where ρ_N is the output of the neural network to be trained, and ρ_N^{i,data}, i = 1, 2, ..., n_D, are the given data. In this configuration, it is important that the additional network does not require exhaustive training, and is simply required to improve the prediction of DeepONet to match the sensor data.
For this add-on network, we use the following parameters:

• Hidden layers for the neural network "NN": 6 × 40;
• Activation function: tanh;
• Learning rate: 8 × 10^-4;
• Epochs: 20000.

We note that the training of the above neural network is very fast (usually one or two minutes). The results for the densities of N and O2 with different numbers of data points are shown in Figs. 9 and 10, respectively. We observe that even with three data points we obtain satisfactory predictions. When we use more data, we obtain very accurate solutions compared to the reference data. We do not show the results for ρ_N2, ρ_O, ρ_NO since they are similar to those for ρ_O2 and ρ_N.
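The effect of the few-point correction can be illustrated with a simplified, testable stand-in: instead of the small MLP of Fig. 8, the sketch below fits an affine correction ρ_N(x) ≈ a·ρ*_N(x) + b to three hypothetical sensor points by least squares, which removes exactly the kind of shift seen in Fig. 7 (all profiles here are synthetic, not the paper's data):

```python
import numpy as np

# Hypothetical DeepONet output rho* (shifted from the truth) and a few
# sensor measurements of the true profile; all values are synthetic.
x = np.linspace(0.0, 1.0, 50)
rho_true = np.exp(-3.0 * x)            # stand-in for the reference solution
rho_star = 0.8 * rho_true + 0.05       # DeepONet prediction with a shift

sensors = np.array([5, 25, 45])        # indices of 3 sparse measurements
A = np.column_stack([rho_star[sensors], np.ones(3)])
coef, *_ = np.linalg.lstsq(A, rho_true[sensors], rcond=None)

rho_corrected = coef[0] * rho_star + coef[1]
print(np.max(np.abs(rho_corrected - rho_true)) < 1e-10)  # True
```

In the paper's setting the correction is a small trainable network rather than an affine map, but the role is the same: reuse the shape of the pre-trained prediction and correct it to match the sparse data.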


Fig. 10. Extrapolation of ρ_O2 for Mach = 7.5 and Mach = 10.5 without and with data. For each Mach number, the number of data points is increased from 3 to 4 and then 5. We obtain good results with a few data points for the extrapolation. The red dot-dash lines are the outputs of the DeepONet G_{ρO2} without data, while the black dash lines are the predictions with data.

The results from DeepONet demonstrate that this architecture can efficiently represent the operators for the non-equilibrium chemistry as well as for the velocity and temperature downstream of the shock. Within the training range, the network performs exceptionally well in predicting these fields. For extrapolation, in addition to knowledge of the input function we assume knowledge of only a few data points of the output, which may be available from sensors. These few sensor data enable accurate predictions with further efficient training. The above elements are the building blocks for our design of DeepM&Mnets, which couple the pre-trained DeepONets, and whose architectures target data assimilation where we do not have knowledge of the full input function but rather just a few data points.

4. DeepM&Mnet framework: architectures and results

In this section, we propose the DeepM&Mnet framework for hypersonic multi-physics and multiscale problems by coupling the pre-trained DeepONets G_{U,T} and G_{ρN2,O2,N,O,NO} developed in subsection 3.2. Unlike the building-block DeepONets, which require full input functions to make predictions, we now relax this requirement and only assume that we have some sensor data for the inputs. For all the tests performed in this section, we randomly select a value of the Mach number in the interval [8, 10].

4.1. Parallel DeepM&Mnet

We begin by proposing the parallel DeepM&Mnet. Assume that we have some data for all the variables, i.e., ρ_k^{j,data}, k = N2, O2, N, O, NO, and U^{j}_{data}, T^{j}_{data}, j = 1, 2, ..., n_D, where n_D is the number of data points. We design the parallel DeepM&Mnet as follows:

1. We construct a neural network "NN" (to be trained) that takes x as the input, and the velocity and temperature U, T as well as the densities of all species, i.e., ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO, as the outputs.
2. We then feed U, T as the input of the pre-trained DeepONet G_{ρN2,O2,N,O,NO} to output the densities of the five species ρ*_N2, ρ*_O2, ρ*_N, ρ*_O, ρ*_NO, while we feed ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO as the input of the pre-trained DeepONet G_{U,T} to output the velocity U* and temperature T*.
3. Then, we define the total loss by combining the mean square errors between ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO, U, T and ρ*_N2, ρ*_O2, ρ*_N, ρ*_O, ρ*_NO, U*, T*, and the mean square errors between the data and the outputs of the neural network "NN".

We show the schematic of the parallel DeepM&Mnet in Fig. 11. To stabilize the training process, we add an L2 regularization of the training parameters (i.e., the weights and biases of the NN) to the loss function. Moreover, to obtain a more robust training process, we also add one more term related to global mass conservation, i.e., the condition at steady state:

Fig. 11. Schematic of the parallel DeepM&Mnet. Here we assume we have some data for all seven state variables.

∂(ρU)/∂x = 0,

or equivalently,

ρU(x) ≡ Const,

where ρ is the total density. Furthermore, we assign each term of the loss function a weight. Therefore, the total loss is given as follows:

L = (ω_D/n_D) L_data + (ω_O/n_O) L_op + ω_R L_reg + (ω_G/n_G) L_G,    (4.1)

where L_reg = ‖θ‖²₂, θ is the set of training parameters of the neural network to be trained, and


L_data = Σ_{j=1}^{n_D} Σ_{k∈{N2,O2,N,O,NO}} ‖ρ_k^{j,data} − ρ_k(x_j)‖² + Σ_{j=1}^{n_D} ‖U^{j}_{data} − U(x_j)‖² + Σ_{j=1}^{n_D} ‖T^{j}_{data} − T(x_j)‖²,

L_op = Σ_{j=1}^{n_O} Σ_{k∈{N2,O2,N,O,NO}} ‖ρ*_k(x_j) − ρ_k(x_j)‖² + Σ_{j=1}^{n_O} ‖U*(x_j) − U(x_j)‖² + Σ_{j=1}^{n_O} ‖T*(x_j) − T(x_j)‖²,

L_G = Σ_{j=1}^{n_G} ‖ρU(x_j) − Const‖²,

where ρ = ρ_N2 + ρ_O2 + ρ_N + ρ_O + ρ_NO, n_D is the number of data points, n_O is the number of points for the variables, and n_G is the number of points for the global conservation. Here we take the average of ρU from the CFD data as the value of the constant "Const".
An important consideration is the operator loss L_op, which is a surrogate for the governing equations in the context of PINNs. Note that using DeepONets and L_op does not preclude including the equations as additional constraints; in fact, we have added mass conservation to the loss. However, using DeepONets and their loss L_op is an efficient way of incorporating the physics because the DeepONets are pre-trained.
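Assembling the total loss (4.1) amounts to a weighted sum of four scalar terms; a sketch with synthetic stand-in arrays (the weights, point counts and field values are illustrative assumptions, not the paper's trained quantities):

```python
import numpy as np

def sq_err(a, b):
    """Sum of squared differences, as in the L_data / L_op / L_G terms."""
    return np.sum((a - b) ** 2)

rng = np.random.default_rng(1)

n_D, n_O, n_G = 5, 40, 40                   # data / operator / conservation points
w_D, w_O, w_R, w_G = 1.0, 1.0, 1e-4, 1.0    # weights (values illustrative)

# Synthetic stand-ins for the seven fields (rho_N2..rho_NO, U, T):
nn_at_data = rng.random((7, n_D))    # NN outputs at the n_D measurement points
data = nn_at_data + 0.01             # sparse measurements
nn_dense = rng.random((7, n_O))      # NN outputs on the dense grid
onet_dense = nn_dense + 0.02         # pre-trained DeepONet outputs there
rho_U = 1.0 + 0.005 * rng.standard_normal(n_G)  # total mass flux rho*U
const = rho_U.mean()                 # "Const" taken from the CFD data
theta_sq = 3.7                       # ||theta||_2^2 of the NN weights (dummy)

L_data = sq_err(nn_at_data, data)
L_op = sq_err(nn_dense, onet_dense)
L_G = np.sum((rho_U - const) ** 2)

L = w_D / n_D * L_data + w_O / n_O * L_op + w_R * theta_sq + w_G / n_G * L_G
print(L > 0.0)  # True
```

In the actual framework this scalar L is minimized over the NN weights by gradient descent, with the DeepONet weights frozen.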
We set ω_D = 1.0 and ω_O = 1.0. For each variable, we use 5 data points and the following hyperparameters to train the parallel DeepM&Mnet:

• Hidden layers for the neural network "NN": 6 × 50;
• Activation function: tanh;
• Learning rate: 5 × 10^-4;
• Epochs: 300000.

For the first example, we do not use the regularization term (ω_R = 0). In Fig. 12, we report the results with and without the global conservation. Observe that we obtain very accurate predictions. We note, however, mild oscillations in the NN outputs ρ_N2, ρ_O2, ρ_N, ρ_O, ρ_NO and U, T. In addition, the NN outputs are not very accurate in some cases. However, the


Fig. 12. (a)-(g): Predictions of all the variables (densities of the chemical species, the velocity and the temperature) using the parallel DeepM&Mnet with (ω_G = 1) or without (ω_G = 0) global conservation; ω_R = 0. (h): Loss versus epoch.

Table 4
The values of ‖Ref − Pred‖² for the parallel DeepM&Mnet using 5 data points for each variable, where Ref denotes the reference data and Pred denotes the prediction of the DeepONet output in the DeepM&Mnet architecture. ω_G is the weight for the global conservation term in the loss function; ω_R is the weight for the regularization term.

                        ρ_N2      ρ_O2      ρ_N       ρ_O       ρ_NO      U         T
ω_G = 0, ω_R = 0        1.84e-04  7.99e-04  3.91e-04  5.06e-07  7.51e-06  2.45e-06  4.49e-06
ω_G = 1, ω_R = 0        4.33e-04  1.36e-05  1.24e-04  5.72e-07  8.27e-07  1.46e-06  2.04e-06
ω_G = 0, ω_R = 10^-4    9.13e-03  3.25e-02  1.71e-03  4.21e-05  1.69e-04  3.57e-05  5.34e-05
ω_G = 1, ω_R = 10^-4    1.14e-03  8.04e-03  3.08e-04  1.06e-05  4.99e-05  5.80e-06  6.83e-06

DeepONet outputs ρ*_N2, ρ*_O2, ρ*_N, ρ*_O, ρ*_NO and U*, T* are always smooth and accurate, even though we have not used any regularization. The pre-trained DeepONets that encode the physics therefore deliver an effective regularization in these tests.
We report the mean square errors between the predictions of the DeepONet output and the reference data, with and without L2 regularization or global conservation, in Table 4. Without L2 regularization, enforcing global conservation does not appreciably improve the accuracy compared with the results obtained without it. On the other hand, when the L2 regularization is used, we obtain a significant improvement in accuracy by enforcing global conservation for all variables. These results may suggest that regularization is not needed, although it is in fact important, as we explain below.


Fig. 13. Schematic of the series DeepM&Mnet architecture that employs the data of the velocity and the temperature.

It is important to recognize the performance of the DeepM&Mnet. Without complete knowledge of any of the fields, and starting only with a few data points for the inputs, we are able to efficiently predict the entire coupled non-equilibrium chemistry and flow. The predicted solution, reliant on the pre-trained DeepONets, respects the operators that govern the physics of the problem. Compared to conventional data assimilation approaches that attempt to reconstruct the full fields from limited data, our approach is extremely efficient. But as in any solution of an inverse problem where the physics are nonlinear, convergence is not guaranteed. For this reason, the regularization term may be needed for the NN in instances where convergence is difficult to achieve. It is also important to ensure that the output of the NN does not suffer from overfitting, since it serves as input to all the DeepONets. An example that demonstrates how regularization can be beneficial is provided in Appendix C.

4.2. Series DeepM&Mnets

In practice, sensor data are not generally available for all the field variables. In the present configuration, for example, it is less likely that we will have data for the densities of all the chemical species, or perhaps for any of them. It may, however, be reasonable to expect access to some sparse data for the velocity and temperature. Therefore, we now consider a different architecture of DeepM&Mnet, which we term the series architecture, that only requires data for the velocity and temperature, i.e., U^{j}_{data}, T^{j}_{data}, j = 1, 2, ..., n_D. We design the series DeepM&Mnet as follows:

1. We construct a neural network "NN" (to be trained) that takes x as the input, and the velocity and temperature U, T as the outputs, which are fed as the input to the pre-trained DeepONet G_{ρN2,O2,N,O,NO}.
2. We then feed the outputs of the DeepONet G_{ρN2,O2,N,O,NO}, i.e., the densities of the chemical species ρ*_N2, ρ*_O2, ρ*_N, ρ*_O, ρ*_NO, as the input to the pre-trained DeepONet G_{U,T} to output the velocity U* and temperature T*.
3. Then, we define the total loss as the sum of the mean square error between the outputs of the neural network "NN", i.e., U, T, and the outputs of the DeepONet G_{U,T}, i.e., U*, T*, and the loss of the measurements, i.e., the mean square error between the data and the outputs of the neural network "NN".

We show the schematic of the series DeepM&Mnet in Fig. 13. The loss function shares the same form as equation (4.1), with L_data, L_op, and L_G given by

L_data = Σ_{j=1}^{n_D} ‖U^{j}_{data} − U(x_j)‖² + Σ_{j=1}^{n_D} ‖T^{j}_{data} − T(x_j)‖²,

L_op = Σ_{j=1}^{n_O} ‖U*(x_j) − U(x_j)‖² + Σ_{j=1}^{n_O} ‖T*(x_j) − T(x_j)‖²,

L_G = Σ_{j=1}^{n_G} ‖ρ*U*(x_j) − Const‖²,

where ρ* = ρ*_N2 + ρ*_O2 + ρ*_N + ρ*_O + ρ*_NO. In addition to the capacity of this configuration to predict all seven field variables from limited data of just two inputs, the architecture also has the benefit that the inputs to G_{U,T} are 'naturally' regularized because they are the outputs of the upstream DeepONet.
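The series wiring can be sketched as a composition of a trainable network with two frozen operator surrogates. The callables below are synthetic stand-ins for the pre-trained DeepONets, not their actual learned mappings:

```python
import numpy as np

# Hypothetical frozen surrogates standing in for the two pre-trained DeepONets;
# their learned mappings are replaced here by simple synthetic formulas.
def G_rho(U, T):
    """[U, T] -> five species densities (synthetic stand-in)."""
    return np.stack([np.exp(-T), 0.2 * U, 0.1 * U * T, T ** 2, U + T])

def G_UT(rho):
    """Five densities -> [U*, T*] (synthetic stand-in)."""
    return rho.sum(axis=0), rho[0] + rho[3]

def nn(x, theta):
    """The trainable network producing U(x), T(x); here a toy linear model."""
    return theta[0] * x + theta[1], theta[2] * x + theta[3]

x = np.linspace(0.0, 1.0, 20)
U, T = nn(x, theta=np.array([1.0, 0.5, -0.3, 2.0]))

rho_star = G_rho(U, T)              # densities predicted from the NN's U, T
U_star, T_star = G_UT(rho_star)     # closing the loop through the operators

# The operator loss L_op drives (U, T) toward (U*, T*) during training of nn.
L_op = np.mean((U - U_star) ** 2 + (T - T_star) ** 2)
print(L_op >= 0.0)  # True
```

Only the parameters of nn are updated during training; the two surrogates stay frozen, which is what makes their outputs act as a physics-informed regularizer.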


Fig. 14. (a)-(g): Predictions of all the variables (densities of the chemical species, the velocity and the temperature) using the series DeepM&Mnet with (ω_G = 1) or without (ω_G = 0) the global mass conservation constraint; ω_R = 0. (h): Loss versus epoch.

We use 5 data points for both the velocity and temperature, and train the network with the following parameters:

• Hidden layers for the neural network "NN": 6 × 50;
• Activation function: tanh;
• Learning rate: 6 × 10^-4;
• Epochs: 400000.

The results for the series DeepM&Mnet are shown in Fig. 14. Again, the predictions are in good agreement with the reference solutions. Also, the DeepONet outputs are always smooth and accurate, consistent with the notion that the pre-trained DeepONets regularize the predictions because the physics modeled by the operators are encoded during their training. As for the parallel DeepM&Mnet, we show the mean square errors between the predictions of the DeepONet output and the reference data, with and without L2 regularization or the global mass conservation, for the series DeepM&Mnet in Table 5. We observe again that when the L2 regularization is used, we obtain much more accurate predictions by additionally enforcing the global conservation law.
Furthermore, we observe that even without the regularization, the training processes for the parallel and series DeepM&Mnets proposed in this subsection are stable. In Appendix C, we illustrate the importance of regularization for another series DeepM&Mnet, which does not employ data for the velocity and temperature but rather for the densities of the chemical species.


Table 5
The values of ‖Ref − Pred‖² for the series DeepM&Mnet using 5 data points for each variable, where Ref denotes the reference solutions and Pred denotes the prediction of the DeepONet output in the DeepM&Mnet architecture. ω_G is the weight for the global conservation term in the loss function; ω_R is the weight for the regularization term.

                        ρ_N2      ρ_O2      ρ_N       ρ_O       ρ_NO      U         T
ω_G = 0, ω_R = 0        6.31e-04  8.75e-05  1.77e-04  2.11e-06  1.24e-06  3.50e-07  1.18e-06
ω_G = 1, ω_R = 0        8.02e-04  3.92e-05  5.33e-04  1.98e-06  2.45e-06  3.29e-07  8.85e-07
ω_G = 0, ω_R = 10^-5    3.42e-03  1.32e-03  7.16e-04  2.99e-05  7.89e-06  2.42e-07  1.46e-06
ω_G = 1, ω_R = 10^-5    2.85e-04  1.14e-04  3.18e-05  1.25e-05  1.21e-06  1.63e-06  4.78e-06

5. Conclusion

The simulation of hypersonic flow is a challenging multiscale and multiphysics (M&M) problem: at high Mach numbers, the interaction with a shock leads to excessively high temperatures that can cause dissociation of the gas. When the reaction rates are commensurate with the rates associated with the flow itself, the dynamics of the dissociation chemistry and the flow are coupled and must be solved together. These coupled physics lead to changes in the chemical composition of the gas, with densities spanning orders of magnitude within very small regions, or steep boundary layers, behind the shock.
We develop a new framework called DeepM&Mnet to address M&M problems in general and the hypersonics problem in particular. We first presented the DeepONets, which serve as the building blocks of the DeepM&Mnet, for the model of the non-equilibrium chemistry that takes place behind a normal shock at high Mach numbers. We simulate the interplay of the flow velocity and temperature as well as of the five chemical species, whose densities span eight orders of magnitude downstream of the shock. Moreover, we test the case where the input is not within the input space, i.e., the Mach number is out of the training range [8, 10]. In this case, we cannot obtain accurate predictions directly; however, we resolve this issue by combining a few data points and the pre-trained DeepONets with a simple supervised network.
To employ the pre-trained DeepONets as building blocks to form the DeepM&Mnet, we develop two architectures, namely a parallel and a series DeepM&Mnet. The parallel variant requires some data for all the variables, while the series DeepM&Mnet only requires some data for the velocity and the temperature. Moreover, we show that the framework can accommodate additional constraints such as global mass conservation, which, in addition to improving accuracy, is also stabilizing.
We demonstrate in this work that DeepM&Mnet may provide a versatile new approach in computational science and engineering for complex M&M problems. The codes are compact and easy to adopt in diverse domains, as well as to maintain or enhance further. The value of DeepM&Mnet is best demonstrated in the context of assimilating scarce sensor data in complex physics. This problem of data assimilation has hindered simulation science for decades because it relies on a repeated, or iterative, procedure with solutions of nonlinear coupled equations in the loop, and knowledge from previous assimilation tasks is not effectively exploited. In the context of DeepM&Mnet, these previous solutions are encoded in the pre-training of the DeepONets, and DeepM&Mnets deliver a seamless integration of models and data, all incorporated into a loss function. We develop the DeepM&Mnet framework specifically to address data-poor regimes. We have removed the tyranny of mesh generation, and with offline training strategies based on single-field representations by the DeepONets, the trained DeepM&Mnet can produce solutions in a fraction of a second. This eliminates the need for reduced-order modeling, which has been rather ineffective for nonlinear complex problems.

CRediT authorship contribution statement

Zhiping Mao: Conceptualization, Methodology, Investigation, Coding, Writing - original draft, Writing - review & editing,
Visualization.
Lu Lu: Conceptualization, Methodology, Investigation, Writing - original draft, Writing - review & editing.
Olaf Marxen: Providing DNS data, Writing - original draft, Writing - review & editing.
Tamer A. Zaki: Conceptualization, Methodology, Investigation, Writing - original draft, Writing - review & editing, Supervision, Project administration, Funding acquisition.
George Em Karniadakis: Conceptualization, Methodology, Investigation, Writing - original draft, Writing - review & editing, Supervision, Project administration, Funding acquisition.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have
appeared to influence the work reported in this paper.

Acknowledgement

The authors acknowledge support from DARPA/CompMods HR00112090062.


Appendix A. Physico-chemical model of the simulation code

This appendix describes the details of the physico-chemical model implemented in the simulation code and hence completes the description of the underlying physical model. It is a slightly shortened version of the description provided in [29]. The physico-chemical model serves to relate the following quantities and in this way closes the set of governing equations:

T = T̃(ẽ, ρ̃^s) / [(γ − 1) T̃_∞],    (A.1)

p = p̃(T̃, ρ̃, χ^s) / (ρ̃_∞ c̃²_∞),    (A.2)

μ = μ̃(T̃, p̃, χ̃^s) / μ̃_∞,    (A.3)

k = k̃(T̃, p̃, χ̃^s, ẽ) / k̃_∞,    (A.4)

ẇ^s = M̃^s Ω̃^s(T̃, ρ̃, p̃, χ̃^s) / (ρ̃_∞ ã_∞ / L̃_ref).    (A.5)

The species mass fraction χ^s is defined below.

A.1. Internal energy, temperature and species enthalpy

Internal energy e (see equation (2.7)) is related to the temperature as well as to the gas composition. Hence, knowledge of both the energy and the species densities ρ̃^s, which are advanced in time using transport equations (2.1) to (2.4), allows us to evaluate the temperature T̃ of the gas using an iterative process. In turn, this temperature can be used to obtain the viscosity μ̃ as well as the heat transfer coefficient k̃. All corresponding calculations are performed for dimensional quantities, and then the results are non-dimensionalized before they are used in the transport equations. The following set of equations is needed in this process.


ẽ = Σ_{s=1}^{NS} ρ̃^s ẽ^s / ρ̃,    (A.6)

ẽ^s(T̃) = ẽ^s_0 + ẽ^s_Trans + ẽ^s_Rot + ẽ^s_Vib.    (A.7)

In this equation, the formation enthalpy ẽ^s_0 at a temperature of 0 K is a known species property, whereas the specific translational energy ẽ^s_Trans, the specific rotational energy ẽ^s_Rot and the specific vibrational energy ẽ^s_Vib are given as follows:

ẽ^s_Trans = (3/2) R̃^s T̃,  s ∈ NS,    (A.8)

ẽ^s_Rot = { 0,  s ∉ H_p;   R̃^s T̃,  s ∈ H_p },    (A.9)

ẽ^s_Vib = { 0,  s ∉ H_p;   R̃^s Θ̃^s_Vib / [exp(Θ̃^s_Vib / T̃) − 1],  s ∈ H_p }.    (A.10)

Both the specific rotational energy ẽ^s_Rot and the specific vibrational energy ẽ^s_Vib are zero for atoms, but have non-zero values for (diatomic) molecules s ∈ H_p, where H_p refers to the subset of NS representing only molecules. In the last equation, Θ̃^s_Vib is the vibrational characteristic temperature associated with the single vibrational mode of the corresponding species, and it is a property of the species.
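Equations (A.6)-(A.10) make ẽ a monotonically increasing function of T̃, so the temperature can be recovered from the energy by a simple root find. The sketch below uses a toy two-species mixture with illustrative constants (not real air data) and bisection in place of the code's actual iteration scheme:

```python
import numpy as np

# Toy two-species mixture (one diatomic molecule, one atom); all constants
# below are illustrative assumptions, not real air data.
R = np.array([296.8, 519.0])          # specific gas constants R^s [J/(kg K)]
e0 = np.array([0.0, 0.0])             # formation enthalpies at 0 K (set to zero)
theta_vib = np.array([3371.0, 1.0])   # vibrational temperatures; 2nd value unused (atom)
is_molecule = np.array([True, False])
chi = np.array([0.7, 0.3])            # mass fractions, summing to 1

def e_mix(T):
    """Mixture internal energy per unit mass from (A.6)-(A.10)."""
    e_trans = 1.5 * R * T                                  # (A.8)
    e_rot = np.where(is_molecule, R * T, 0.0)              # (A.9)
    e_vib = np.where(is_molecule,                          # (A.10)
                     R * theta_vib / np.expm1(theta_vib / T), 0.0)
    return np.sum(chi * (e0 + e_trans + e_rot + e_vib))

def temperature(e_target, lo=100.0, hi=20000.0, tol=1e-8):
    """Invert e(T) = e_target by bisection; e_mix is monotonically increasing."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if e_mix(mid) < e_target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

T_true = 5000.0
T_recovered = temperature(e_mix(T_true))
print(abs(T_recovered - T_true) < 1e-3)  # True
```

The simulation code presumably uses a faster Newton-type iteration, but monotonicity of ẽ(T̃) is what guarantees a unique solution in either case.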

A.2. Equation of state relating pressure, density and temperature

The partial density ρ̃^s of each species s is related to the species mass fraction χ^s by

χ^s = ρ^s / ρ,    (A.11)

with Σ_s χ^s = 1. Substituting into equation (2.5) allows us to obtain the pressure p̃ = p̃(T̃, ρ̃, χ^s):

p̃ = Σ_{s=1}^{NS} ρ̃^s R̃^s T̃ = ρ̃ Σ_{s=1}^{NS} χ^s R̃^s T̃.    (A.12)
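A direct numerical check of (A.11)-(A.12) with illustrative partial densities and gas constants (the values below are assumptions for demonstration, not the species data of this work):

```python
import numpy as np

# Equation of state (A.11)-(A.12) for a toy 3-species mixture.
rho_s = np.array([1.0e-2, 3.0e-3, 1.0e-4])   # partial densities rho^s [kg/m^3]
R_s = np.array([296.8, 259.8, 188.9])        # specific gas constants R^s [J/(kg K)]
T = 4000.0                                   # temperature [K]

rho = rho_s.sum()                # total density
chi = rho_s / rho                # mass fractions (A.11); they sum to 1
p = rho * np.sum(chi * R_s * T)  # (A.12), equivalent to sum_s rho^s R^s T

print(np.isclose(p, np.sum(rho_s * R_s * T)))  # True
```

Both forms of (A.12) are algebraically identical; the mass-fraction form is convenient when χ^s rather than ρ̃^s is carried by the solver.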


A.3. Transport properties: viscosity and thermal conductivity

The calculation of transport properties follows Magin and Degrez [25] and is only briefly described here. Full details can be found in the reference. The species mole fraction X^s (with Σ_s X^s = 1) is defined using the mixture molar mass M̃ as

X^s = (M̃ / M̃^s) χ^s,    (A.13)

M̃ = ( Σ_{s=1}^{NS} χ^s / M̃^s )^{−1}.    (A.14)

The viscosity is computed analogously to eqs. (3) in [25]:

μ̃ = Σ_{s=1}^{NS} ζ̃^s X̃^s,    (A.15a)

Σ_{s=1}^{NS} G̃^{sr} ζ̃^s = X̃^r,  r ∈ NS.    (A.15b)

The viscosity matrix G̃(X̃^r, T̃) is given in appendix C of Magin and Degrez [25] (in the reference, the viscosity matrix is denoted as G_η).
Both the translational thermal conductivity λ̃_Trans and the internal thermal conductivity λ̃_int contribute to the overall thermal conductivity:

k̃ = λ̃_Trans + λ̃_int,    (A.16a)

λ̃_Trans = Σ_{s=1}^{NS} ξ̃^s X̃^s,    (A.16b)

Σ_{s=1}^{NS} K̃^{sr} ξ̃^s = X̃^r,  r ∈ NS,    (A.16c)

λ̃_int = λ̃_Rot + λ̃_Vib = Σ_{s∈H_p} ρ̃^s (c̃^s_Rot + c̃^s_Vib) / ( Σ_{r∈NS} X̃^r / D̃^{sr} ).    (A.16d)

The thermal conductivity matrix K̃(X̃^r, T̃) is given in appendix C of Magin and Degrez [25] (in the reference, the thermal conductivity matrix is denoted as G_λh).
As can be seen from the equation above, the internal thermal conductivity is composed of a rotational thermal conductivity λ̃_Rot and a vibrational thermal conductivity λ̃_Vib (see eqs. (25) in [25]). In the equations above, c̃^s_Rot = dẽ^s_Rot/dT̃ and c̃^s_Vib = dẽ^s_Vib/dT̃ are the rotational and vibrational species specific heats per unit mass, respectively. Note that the rotational and vibrational contributions are non-zero only for molecules. The binary diffusion coefficients D̃ are given in appendix A of Magin and Degrez [25] (in the reference, they are denoted as D).

A.4. Chemistry: species production

We consider $\Upsilon$ elementary reactions among the species $s \in NS$, defined by the following equation:

$$\sum_{s=1}^{NS} \nu^s_{R,\eta} \tilde{X}^s = \sum_{s=1}^{NS} \nu^s_{P,\eta} \tilde{X}^s, \quad \eta \in \Upsilon, \tag{A.17}$$

where $\nu^s_{R,\eta}$ and $\nu^s_{P,\eta}$ are the stoichiometric coefficients of the $\eta$th elementary reaction for the reactants and products, respectively. The species source term for species $s \in NS$ is obtained from the law of mass action, together with an Arrhenius law for the forward reaction rate $\tilde{k}_{f,\eta}$. The forward and backward reaction rates $\tilde{k}_{f,\eta}$, $\tilde{k}_{b,\eta}$ for reaction $\eta \in \Upsilon$ are related by the equilibrium constant of the $\eta$th reaction, $\tilde{K}_{e,\eta}(\tilde{T})$:

$$\dot{\tilde{\omega}}^s = \tilde{M}^s \sum_{\eta=1}^{\Upsilon} \left( \nu^s_{P,\eta} - \nu^s_{R,\eta} \right) \left[ \tilde{k}_{f,\eta} \prod_{r=1}^{NS} \left( \frac{\tilde{\rho}^r}{\tilde{M}^r} \right)^{\nu^r_{R,\eta}} - \tilde{k}_{b,\eta} \prod_{r=1}^{NS} \left( \frac{\tilde{\rho}^r}{\tilde{M}^r} \right)^{\nu^r_{P,\eta}} \right], \tag{A.18}$$


 
$$\tilde{k}_{f,\eta} = \tilde{C}_\eta \tilde{T}^{\beta_\eta} \exp\!\left( -\frac{\tilde{\vartheta}_\eta}{\tilde{k}_B \tilde{T}} \right), \tag{A.19}$$

$$\tilde{k}_{f,\eta} = \tilde{k}_{b,\eta}\, \tilde{K}_{e,\eta}. \tag{A.20}$$

In the equations above, $\tilde{C}_\eta$ is a positive constant, $\beta_\eta$ is a positive or negative exponent, and $\tilde{\vartheta}_\eta$ is the activation energy of the $\eta$th reaction [38,39]. The equilibrium constant $\tilde{K}_{e,\eta}$ is defined based on a reference pressure, $\tilde{p}_{eq} = 1$ Pa:

$$\ln \tilde{K}_{e,\eta} = -\frac{1}{\tilde{R}\tilde{T}} \sum_{s=1}^{NS} \left( \nu^s_{P,\eta} - \nu^s_{R,\eta} \right) \tilde{g}^s(\tilde{T}, \tilde{p}_{eq})\, \tilde{M}^s. \tag{A.21}$$

In this equation, the Gibbs free energy $\tilde{g}^s(\tilde{T}, \tilde{p}_{eq})$ is the sum of three contributions (translational Gibbs free energy $\tilde{g}^s_T(\tilde{T}, \tilde{p}_{eq})$, rotational Gibbs free energy $\tilde{g}^s_R$ and vibrational Gibbs free energy $\tilde{g}^s_V$):

$$\tilde{g}^s = \tilde{g}^s_T + \tilde{g}^s_R + \tilde{g}^s_V, \tag{A.22}$$

$$\tilde{g}^s_T(\tilde{T}, \tilde{p}_{eq}) = -\tilde{R}_s \tilde{T} \ln\!\left[ \frac{\tilde{R}\tilde{T}}{N_A \tilde{p}_{eq}} \left( \frac{2\pi \tilde{M}^s \tilde{R}\tilde{T}}{N_A^2 \tilde{h}_p^2} \right)^{3/2} \right], \quad s \in NS, \tag{A.23}$$

$$\tilde{g}^s_R(\tilde{T}) = \begin{cases} 0, & s \notin H_p, \\ \tilde{R}_s \tilde{T} \ln\!\left( \dfrac{\tilde{\Theta}^s_R\, \tilde{\sigma}^s}{\tilde{T}} \right), & s \in H_p, \end{cases} \tag{A.24}$$

$$\tilde{g}^s_V(\tilde{T}) = \begin{cases} 0, & s \notin H_p, \\ -\tilde{R}_s \tilde{T} \ln\!\left( 1 - \exp(-\tilde{\Theta}^s_V/\tilde{T}) \right), & s \in H_p. \end{cases} \tag{A.25}$$

In these equations, $\tilde{h}_p$ represents Planck's constant and $N_A$ Avogadro's number; $\tilde{\Theta}^s_R$ is the rotational characteristic temperature and $\tilde{\sigma}^s$ the steric factor of species $s$. The necessary spectroscopic constants are taken from Gurvich et al. [14]. The specific rotational Gibbs free energy $\tilde{g}^s_R$ and the specific vibrational Gibbs free energy $\tilde{g}^s_V$ are non-zero only for molecules $s \in H_p$.
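As a small illustration of the modified Arrhenius law (A.19), the sketch below evaluates a forward rate coefficient, writing the activation energy as an equivalent temperature $\theta_d = \tilde{\vartheta}_\eta / \tilde{k}_B$. The rate parameters are hypothetical placeholders, not the Park-model values [38,39] used in the paper.

```python
# Sketch of the Arrhenius forward rate (A.19): k_f = C * T^beta * exp(-theta_d/T),
# where theta_d = activation energy / Boltzmann constant. Constants below are
# hypothetical, dissociation-like placeholders.
import math

def k_forward(T, C, beta, theta_d):
    """Forward rate coefficient of one elementary reaction."""
    return C * T**beta * math.exp(-theta_d / T)

# Illustrative (assumed) parameters for a dissociation-type reaction
C, beta, theta_d = 7.0e15, -1.6, 113200.0
k_3000 = k_forward(3000.0, C, beta, theta_d)
k_6000 = k_forward(6000.0, C, beta, theta_d)
```

The exponential dominates at these temperatures, so the rate grows steeply with $\tilde{T}$ even though $\beta$ is negative.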

Appendix B. DeepONet predictions for the chemical species

In this appendix, we demonstrate another DeepONet configuration to predict the densities ρ N , ρ O and ρ N O using as
inputs to the branch nets only the density ρ N 2 and/or ρ O 2 . For each of the target outputs, we build a separate DeepONet:
G ρN , G ρ O and G ρN O ,

• $G_{\rho_N}: \rho_{N_2} \to \rho_N$ uses $\rho_{N_2}$ as the input of the branch net, and predicts $\rho_N$.
• $G_{\rho_O}: \rho_{O_2} \to \rho_O$ uses $\rho_{O_2}$ as the input of the branch net, and predicts $\rho_O$.
• $G_{\rho_{NO}}: (\rho_{N_2}, \rho_{O_2}) \to \rho_{NO}$ uses both $\rho_{N_2}$ and $\rho_{O_2}$ as the input of the branch net, and predicts $\rho_{NO}$.

Note that the physics requires knowledge of the velocity and temperature, as well as the other species. However, their
influence is all encoded in the solution of the operator that each DeepONet is trained to predict.
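A minimal sketch of a single DeepONet forward pass, in the standard branch/trunk form of Lu et al. [23], may help make the operator notation concrete: the branch net encodes the input function (e.g., $\rho_{N_2}$ sampled at $m$ sensor locations), the trunk net encodes the query coordinate $x$, and the prediction is the dot product of the two embeddings. The network below uses random, untrained weights purely for illustration; a real $G_{\rho_N}$ is trained on CFD data.

```python
# Sketch of a DeepONet forward pass: G(u)(x) = branch(u) . trunk(x).
# Weights are random (untrained) -- for illustrating the architecture only.
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random dense layers (weight, bias) for a toy network."""
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, z):
    for i, (W, b) in enumerate(layers):
        z = z @ W + b
        if i < len(layers) - 1:
            z = np.maximum(z, 0.0)  # ReLU on hidden layers
    return z

m, p = 50, 100                              # sensor count, latent dimension
branch = mlp([m, 100, 100, 100, 100, p])    # 4 hidden layers of width 100
trunk = mlp([1, 100, 100, 100, 100, p])

u_sensors = rng.standard_normal(m)          # input function at m sensors
x = np.array([[0.25]])                      # query location
b_out = forward(branch, u_sensors[None, :]) # branch embedding, shape (1, p)
t_out = forward(trunk, x)                   # trunk embedding, shape (1, p)
G_u_x = float(np.sum(b_out * t_out))        # DeepONet prediction G(u)(x)
```

The 4 × 100 hidden-layer sizes mirror the training parameters listed below; the latent dimension $p$ is an assumption of this sketch.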
For simplicity, we show the three DeepONets with one schematic in Fig. 15. Note that we train the three DeepONets
independently. We also point out here that the data used for all densities are the logarithms of the original data.
The parameters used for the training are as follows:

• Hidden layers for both branch and trunk nets: 4 × 100;
• Activation function: adaptive ReLU;
• Learning rate: 8 × 10−4;
• Epochs: 70,000.
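By "adaptive ReLU" we refer to an adaptive activation function in the sense of scaling the pre-activation by a trainable slope: relu(n · a · x), with trainable parameter a and fixed scale factor n. This is an assumed reading for illustration; the exact variant used in training may differ.

```python
# Sketch of an adaptive ReLU: relu(n * a * x), where a is trainable and n is
# a fixed scale factor. With n * a = 1 it reduces to the plain ReLU.
# (Assumed form for illustration only.)
import numpy as np

def adaptive_relu(x, a, n=10.0):
    """ReLU with a learnable pre-activation scaling n * a."""
    return np.maximum(0.0, n * a * x)

x = np.array([-1.0, 0.5, 2.0])
y = adaptive_relu(x, a=0.1)   # n * a = 1, so this matches plain ReLU
```

During training, the gradient with respect to a lets each layer tune the effective slope of its activation.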

To demonstrate the capability of DeepONets, we use the trained DeepONets to predict new conditions that are not in the training data; some prediction examples are shown in Fig. 16. The predictions for the densities of the three species, i.e., ρN, ρO, and ρNO, are shown in Fig. 16(a)-16(c), while the corresponding training and test losses are shown in Fig. 16(d)-16(f). Our DeepONet predictions agree well with the reference solutions obtained from CFD. However, the computational cost of a DeepONet is much lower than that of a CFD simulation, because the DeepONet prediction is only a forward-pass evaluation of neural networks. Quantitatively, the relative mean-squared error is about 10−5, while the speedup of DeepONet vs. CFD is about 100,000×.


Fig. 15. Schematic of the DeepONets for the chemical species. Here we have three DeepONets: the first one uses ρ N 2 to predict ρ N , i.e., G ρN : ρN2 → ρN ,
the second one uses ρ O 2 to predict ρ O , i.e., G ρ O : ρ O 2 → ρ O , and the third uses ρ N 2 and ρ O 2 to predict ρ N O , i.e., G ρN O : (ρ N 2 , ρ O 2 ) → ρ N O .

Fig. 16. (a)-(c): Predictions of ρ N , ρ O , and ρN O using the corresponding trained DeepONets. (d)-(f): The training losses and test losses for the three
DeepONets, i.e., G ρN , G ρ O , and G ρN O .

Appendix C. Another type of series DeepM&Mnet architecture

In this Appendix we consider another type of series DeepM&Mnet architecture. Similar to the DeepM&Mnet considered
in subsection 4.2, we construct a DeepM&Mnet by interchanging the roles of the densities and the flow. This example will
highlight the benefits of regularization in ensuring the stability of the data assimilation problem.
We assume we have some data for the densities of the five chemical species, i.e., $\rho^j_{k,\mathrm{data}}$, $k = N_2, O_2, N, O, NO$, $j = 1, 2, \ldots, n_d$, where $n_d$ is the number of data points. The series DeepM&Mnet in this case is given as follows:

1. As before, we construct a neural network "NN" (to be trained) that takes $x$ as the input and outputs the densities of the five species $\rho_{N_2}, \rho_{O_2}, \rho_N, \rho_O, \rho_{NO}$, which are fed as the input of the trained DeepONet $G_{U,T}$.
2. We then feed the outputs of the DeepONet $G_{U,T}$, i.e., the velocity $U^*(x)$ and temperature $T^*(x)$, as the inputs of the trained DeepONets $G_{\rho_{N_2},\rho_{O_2},\rho_N,\rho_O,\rho_{NO}}$, which output the densities of the five species $\rho^*_{N_2}, \rho^*_{O_2}, \rho^*_N, \rho^*_O, \rho^*_{NO}$.
3. We then define the total loss as the sum of the mean square error between the outputs of the neural network "NN", i.e., $\rho_{N_2}, \rho_{O_2}, \rho_N, \rho_O, \rho_{NO}$, and the outputs of the DeepONets $G_{\rho_{N_2},\rho_{O_2},\rho_N,\rho_O,\rho_{NO}}$, i.e., $\rho^*_{N_2}, \rho^*_{O_2}, \rho^*_N, \rho^*_O, \rho^*_{NO}$, and the measurement loss, i.e., the mean square error between the data and the outputs of the neural network "NN".


Fig. 17. Schematic of the series DeepM&Mnet that employs some data of the densities of the chemical species.

Table 6
The values of $\|Ref - Pred\|^2$ for the parallel DeepM&Mnet using 6 data points for each variable, where $Ref$ denotes the reference solution and $Pred$ the prediction of the DeepONet output in the DeepM&Mnet architecture. $\omega_G$ is the weight of the global conservation term in the loss function; $\omega_R$ is the weight of the regularization term. The result in bold is the best compared to the other results in the table.

ρN2 ρO 2 ρN ρO ρN O U T
ωG = 0, ω R =0 unstable unstable unstable unstable unstable unstable unstable
ωG = 1, ω R =0 unstable unstable unstable unstable unstable unstable unstable
ωG = 0, ω R = 10−5 unstable unstable unstable unstable unstable unstable unstable
ωG = 1, ω R = 10−5 2.73e-04 4.56e-04 5.96e-04 4.63e-06 1.18e-05 7.25e-06 1.88e-05
ωG = 0, ω R = 10−4 4.18e-04 2.09e-03 1.97e-03 3.51e-05 5.06e-05 4.37e-05 1.09e-04
ωG = 1, ω R = 10−4 1.50e-03 1.75e-03 2.59e-03 1.15e-05 5.29e-05 2.11e-05 5.22e-05

We show the schematic of this type of DeepM&Mnet in Fig. 17. The loss function has the same form as equation (4.1), with $\mathcal{L}_{data}$, $\mathcal{L}_{op}$ and $\mathcal{L}_G$ given by

$$\mathcal{L}_{data} = \sum_{j=1}^{n_d} \sum_{k \in \{N_2, O_2, N, O, NO\}} \left\| \rho^j_{k,\mathrm{data}} - \rho_k(x_j) \right\|^2,$$

$$\mathcal{L}_{op} = \sum_{j=1}^{n_{op}} \sum_{k \in \{N_2, O_2, N, O, NO\}} \left\| \rho^*_k(x_j) - \rho_k(x_j) \right\|^2, \tag{C.1}$$

$$\mathcal{L}_G = \sum_{j=1}^{n_G} \left\| \rho U^*(x_j) - \mathrm{Const.} \right\|^2,$$

where $\rho = \rho_{N_2} + \rho_{O_2} + \rho_N + \rho_O + \rho_{NO}$. We use 6 data points for the densities of all species and train the network with the same parameters used in subsection 4.2.
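A compact sketch of the resulting composite loss, $\mathcal{L} = \mathcal{L}_{data} + \mathcal{L}_{op} + \omega_G \mathcal{L}_G + \omega_R \|\theta\|^2$, following equations (4.1) and (C.1). The arrays below are placeholders standing in for the NN outputs, the frozen DeepONet outputs, the measurements, and the network weights.

```python
# Sketch of the composite DeepM&Mnet training loss with global-conservation
# and L2-regularization terms. Placeholder arrays only; in the actual
# DeepM&Mnet these come from the NN, the frozen DeepONets, and the data.
import numpy as np

def total_loss(rho_nn, rho_data, rho_deeponet, rho_U, const,
               theta, omega_G=1.0, omega_R=1e-5):
    l_data = np.mean((rho_nn - rho_data) ** 2)    # data mismatch, L_data
    l_op = np.mean((rho_deeponet - rho_nn) ** 2)  # operator consistency, L_op
    l_g = np.mean((rho_U - const) ** 2)           # global mass conservation, L_G
    l_reg = np.sum(theta ** 2)                    # L2 weight regularization
    return l_data + l_op + omega_G * l_g + omega_R * l_reg

rng = np.random.default_rng(1)
rho = rng.random(6)                               # placeholder density samples
loss = total_loss(rho, rho, rho, rho_U=np.full(6, 2.0), const=2.0,
                  theta=rng.standard_normal(20))
```

With $\omega_G = 1$ and $\omega_R = 10^{-5}$ this reproduces the best-performing row of Table 6; setting either weight to zero recovers the unstable variants discussed above.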
The results are given in Table 6, where we report the mean square errors between the predictions of the DeepONet
output and the reference data with/without L 2 regularization/global conservation. We observe that without regularization,
even with the global conservation constraint, the training process is unstable. We obtain a stable training process with a
strong regularization (ω R = 10−4 ) or with a weak regularization (ω R = 10−5 ) if additionally we use the global conservation
constraint. Moreover, we observe from the table that the results obtained with a weak regularization (ω R = 10−5 ) and global
conservation have the best accuracy.

References

[1] J.D. Anderson Jr., Hypersonic and High-Temperature Gas Dynamics, American Institute of Aeronautics and Astronautics, 2006.
[2] J.J. Bertin, R.M. Cummings, Critical hypersonic aerothermodynamic phenomena, Annu. Rev. Fluid Mech. 38 (2006) 129–157.
[3] D.A. Buchta, T.A. Zaki, Observation-infused simulations of high-speed boundary-layer transition, J. Fluid Mech. 916 (2021) A44, https://doi.org/10.1017/
jfm.2021.172.
[4] M.K. Bull, Wall-pressure fluctuations beneath turbulent boundary layers: some reflections on forty years of research, J. Sound Vib. 190 (3) (1996)
299–315.


[5] T. Chen, H. Chen, Approximations of continuous functionals by neural networks with application to dynamic systems, IEEE Trans. Neural Netw. 4 (6)
(1993) 910–918.
[6] T. Chen, H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynam-
ical systems, IEEE Trans. Neural Netw. 6 (4) (1995) 911–917.
[7] N.T. Clemens, V. Narayanaswamy, Low-frequency unsteadiness of shock wave/turbulent boundary layer interactions, Annu. Rev. Fluid Mech. 46 (2014)
469–492.
[8] A. Coussement, O. Gicquel, J. Caudal, B. Fiorina, G. Degrez, Three-dimensional boundary conditions for numerical simulations of reactive compressible
flows with complex thermochemistry, J. Comput. Phys. 231 (17) (2012) 5571–5611.
[9] G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Syst. 5 (4) (1992) 455.
[10] M.S. Day, J.B. Bell, Numerical simulation of laminar reacting flows with complex chemistry, Combust. Theory Model. 4 (4) (2000) 535–556.
[11] J. del Águila Ferrandis, M. Triantafyllou, C. Chryssostomidis, G. Karniadakis, Learning functionals via LSTM neural networks for predicting vessel dy-
namics in extreme sea states, arXiv preprint, arXiv:1912.13382, 2019.
[12] L. Duan, M.P. Martín, Procedure to validate direct numerical simulations of wall-bounded turbulence including finite-rate reactions, AIAA J. 47 (1)
(2009) 244–251.
[13] A. Fedorov, Transition and stability of high-speed boundary layers, Annu. Rev. Fluid Mech. 43 (2011) 79–95.
[14] L.V. Gurvich, I.V. Veits, C.B. Alcock, Thermodynamic Properties of Individual Substances, O, H/D,T/, F, Cl, Br, I, He, Ne, Ar, Kr, Xe, Rn, S, N, P, and Their
Compounds, vol. 1, Hemisphere Publishing Corp., New York, 1989.
[15] R. Hilbert, F. Tap, H. El-Rabii, D. Thévenin, Impact of detailed chemistry and transport models on turbulent combustion simulations, Prog. Energy
Combust. Sci. 30 (1) (2004) 61–117.
[16] R. Jahanbakhshi, T.A. Zaki, Nonlinearly most dangerous disturbance for high-speed boundary-layer transition, J. Fluid Mech. 876 (2019) 87–121, https://
doi.org/10.1017/jfm.2019.527.
[17] G. Karniadakis, S. Sherwin, Spectral/hp Element Methods for Computational Fluid Dynamics, third edition, Oxford University Press, 2013.
[18] K. Kitamura, I. Men’shov, Y. Nakamura, Shock/shock and shock/boundary-layer interactions in two-body configurations, in: 35th AIAA Fluid Dynamics
Conference and Exhibit, 2005, p. 4893.
[19] L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Fluid Mechanics, Pergamon Press, 1959.
[20] S. Lanthaler, S. Mishra, G.E. Karniadakis, Error estimates for DeepONets: a deep learning framework in infinite dimensions, arXiv preprint, arXiv:2102.09618, 2021.
[21] H.W. Liepmann, A. Roshko, Elements of Gas Dynamics, John Wiley & Sons, 1957.
[22] X.-D. Liu, S. Osher, T. Chan, et al., Weighted essentially non-oscillatory schemes, J. Comput. Phys. 115 (1) (1994) 200–212.
[23] L. Lu, P. Jin, G. Pang, Z. Zhang, G.E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators,
Nat. Mach. Intell. 3 (3) (2021) 218–229.
[24] L. Lu, X. Meng, Z. Mao, G.E. Karniadakis, DeepXDE: a deep learning library for solving differential equations, SIAM Rev. 63 (1) (2021) 208–228, https://
doi.org/10.1137/19M1274067.
[25] T.E. Magin, G. Degrez, Transport algorithms for partially ionized and unmagnetized plasmas, J. Comput. Phys. (ISSN 0021-9991) 198 (2) (2004) 424–449,
https://doi.org/10.1016/j.jcp.2004.01.012.
[26] T.E. Magin, L. Caillault, A. Bourdon, C.O. Laux, Nonequilibrium radiative heat flux modeling for the Huygens entry probe, J. Geophys. Res. 111 (E07S12)
(2006) 1–11.
[27] Z. Mao, A.D. Jagtap, G.E. Karniadakis, Physics-informed neural networks for high-speed flows, Comput. Methods Appl. Mech. Eng. 360 (2020) 112789.
[28] O. Marxen, T. Magin, G. Iaccarino, E.S. Shaqfeh, A high-order numerical method to study hypersonic boundary-layer instability including high-
temperature gas effects, Phys. Fluids 23 (8) (2011) 084108.
[29] O. Marxen, T.E. Magin, E.S. Shaqfeh, G. Iaccarino, A method for the direct numerical simulation of hypersonic boundary-layer instability with finite-rate
chemistry, J. Comput. Phys. 255 (2013) 572–589.
[30] G. Matheou, C. Pantano, P.E. Dimotakis, Verification of a fluid-dynamics solver using correlations with linear stability results, J. Comput. Phys. 227 (11)
(2008) 5385–5396.
[31] V. Mons, Q. Wang, T.A. Zaki, Kriging-enhanced ensemble variational data assimilation for scalar-source identification in turbulent environments, J. Com-
put. Phys. 398 (2019) 108856.
[32] S. Nagarajan, S.K. Lele, J.H. Ferziger, A robust high-order method for large eddy simulation, J. Comput. Phys. 191 (2003) 392–419.
[33] H.N. Najm, O.M. Knio, Modeling low Mach number reacting flow with detailed chemistry and transport, J. Sci. Comput. 25 (1–2) (2005) 263.
[34] H.N. Najm, P.S. Wyckoff, O.M. Knio, A semi-implicit numerical scheme for reacting flow: I. Stiff chemistry, J. Comput. Phys. 143 (2) (1998) 381–402.
[35] F. Nicoud, Conservative high-order finite-difference schemes for low-Mach number flows, J. Comput. Phys. 158 (1) (2000) 71–97.
[36] NVIDIA, NVIDIA CEO talk, SC’19, 44th min, https://www.youtube.com/watch?v=69neepdejzu, 2019.
[37] C. Park, Nonequilibrium Hypersonic Aerothermodynamics, 1989.
[38] C. Park, Review of chemical-kinetic problems of future NASA missions. I - Earth entries, J. Thermophys. Heat Transf. 7 (3) (1993) 385–398.
[39] C. Park, R.L. Jaffe, H. Partridge, Chemical-kinetic parameters of hyperbolic Earth entry, J. Thermophys. Heat Transf. 15 (1) (Jan. 2001) 76–90.
[40] J. Park, T.A. Zaki, Sensitivity of high-speed boundary-layer stability to base-flow distortion, J. Fluid Mech. 859 (2019) 476–515, https://doi.org/10.1017/
jfm.2018.819.
[41] A. Prakash, N. Parsons, X. Wang, X. Zhong, High-order shock-fitting methods for direct numerical simulation of hypersonic flow with chemical and
thermal nonequilibrium, J. Comput. Phys. 230 (23) (2011) 8474–8507.
[42] M. Raissi, P. Perdikaris, G.E. Karniadakis, Physics-informed neural networks: a deep learning framework for solving forward and inverse problems
involving nonlinear partial differential equations, J. Comput. Phys. 378 (2019) 686–707.
[43] M. Raissi, A. Yazdani, G.E. Karniadakis, Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations, Science 367 (6481)
(2020) 1026–1030.
[44] H. Reed, Role of chemical reactions in hypersonic flows, in: Advances in Laminar-Turbulent Transition Modeling, RTO-EN-AVT-151, 2009, p. 13.
[45] L. Torrey, J. Shavlik, Transfer learning, in: Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques,
IGI Global, 2010, pp. 242–264.
[46] M. Wang, Q. Wang, T.A. Zaki, Discrete adjoint of fractional-step incompressible Navier-Stokes solver in curvilinear coordinates and application to data
assimilation, J. Comput. Phys. 396 (2019) 427–450, https://doi.org/10.1016/j.jcp.2019.06.065.
[47] Q. Wang, Y. Hasegawa, T.A. Zaki, Spatial reconstruction of steady scalar sources from remote measurements in turbulent flow, J. Fluid Mech. 870 (2019)
316–352, https://doi.org/10.1017/jfm.2019.241.
[48] W. Wang, H.C. Yee, B. Sjögreen, T. Magin, C.-W. Shu, Construction of low dissipative high-order well-balanced filter schemes for non-equilibrium flows,
J. Comput. Phys. 230 (11) (May 2011) 4316–4335.
[49] W.E. Williamson, Hypersonic flight testing, in: AIAA Paper, AIAA-92–3989, 1992.
[50] L. Zanus, F.M. Miró, F. Pinna, Parabolized stability analysis of chemically reacting boundary-layer flows in equilibrium conditions, Proc. Inst. Mech. Eng.,
G J. Aerosp. Eng. 234 (1) (2020) 79–95.


[51] X. Zhang, C.-W. Shu, Positivity-preserving high order discontinuous Galerkin schemes for compressible Euler equations with source terms, J. Comput.
Phys. 230 (4) (2011) 1238–1248.
[52] X. Zhong, Additive semi-implicit Runge–Kutta methods for computing high-speed nonequilibrium reactive flows, J. Comput. Phys. 128 (1) (1996) 19–31.
