
Numerical investigation of minimum drag profiles in laminar flow using deep learning surrogates

Li-Wei Chen, Berkay Alp Cakal, Xiangyu Hu, Nils Thuerey

arXiv:2009.14339v1 [physics.flu-dyn] 29 Sep 2020

September 2020

Abstract
Efficiently predicting the flowfield and load in aerodynamic shape optimisation remains a highly challenging and relevant task. Deep learning methods have been of particular interest for such problems, due to their success in solving inverse problems in other fields. In the present study, U-net based deep neural network (DNN) models are trained with high-fidelity datasets to infer flow fields, and are then employed as surrogate models to carry out the shape optimisation problem, i.e. to find a drag-minimal profile with a fixed cross-section area subject to a two-dimensional steady laminar flow. A level-set method as well as a Bézier-curve method are used to parameterise the shape, while trained neural networks in conjunction with automatic differentiation are utilized to calculate the gradient flow in the optimisation framework. The optimised shapes and drag force values calculated from the flowfields predicted by the DNN models agree well with reference data obtained via a Navier-Stokes solver and from the literature, which demonstrates that the DNN models are capable not only of predicting the flowfield but also of yielding satisfactory aerodynamic forces. This is particularly promising as the DNNs were not specifically trained to infer aerodynamic forces. In conjunction with the fast runtime, the DNN-based optimisation framework shows promise for general aerodynamic design problems.

1 Introduction
Owing to its importance in a wide range of fundamental studies and industrial
applications, significant effort has been made to study shape optimisation
for minimising the aerodynamic drag over a bluff body [1, 2]. The deployment of
computational fluid dynamics tools has played an important role in these optimi-
sation problems [3]. While a direct optimisation via high-fidelity computational
fluid dynamics (CFD) models gives reliable results, the high computational cost
of each simulation, e.g., for Reynolds-averaged Navier-Stokes formulations, and
the large number of evaluations needed, lead to assessments that such optimisations are still not feasible for practical engineering [4]. When considering
gradient-based optimisation, the adjoint method provides an effective way to

calculate the gradients of an objective function w.r.t. design variables and al-
leviates the computational workload greatly [5–9], but the number of required
adjoint CFD simulations is typically still prohibitively expensive when multi-
ple optimisation objectives are considered [10]. In gradient-free methods (e.g.
genetic algorithm), the computational cost rises dramatically as the number
of design variables is increased, especially when the convergence requirement
is tightened [11]. Therefore, advances in terms of surrogate-based optimisation
are of central importance for both gradient-based and gradient-free optimisation
methods [12, 13].
Recently, state-of-the-art deep learning methods and architectures have been
successfully developed to achieve fast prediction of fluid physics. Among others,
Bhatnagar et al. [14] developed a convolutional neural network (CNN) method for
aerodynamic flowfields, while others studied the predictability of laminar flows
[15], or employed graph neural networks to predict transonic flows [16]. For
the inference of Reynolds-averaged Navier–Stokes (RANS) solutions, a U-net
based deep learning model was proposed and shown to be significantly faster
than a conventional CFD solver [17]. These promising achievements open up
new possibilities of applying DNN-based flow solvers in the aerodynamic shape
optimisation. In the present study we focus on evaluating the accuracy and
performance of DNN-based surrogates in laminar flow regimes.
Modern deep learning methods are also giving new impetus to aerodynamic
shape optimisation research. Viquerat and Hachem [18] evaluated quantitative
predictions such as drag forces using a VGG-like convolutional neural network.
To improve the surrogate-based optimisation, Li et al. [19] proposed a new sam-
pling method for airfoils and wings based on a generative adversarial network
(GAN). Renganathan et al. [20] designed a surrogate-based framework by train-
ing a deep neural network (DNN) that is used for gradient-based and gradient-
free optimisations. In these studies, the neural network is mainly trained to
construct the mapping between shape parameters and the aerodynamic quanti-
ties (e.g. lift and drag coefficients), but no flowfield information can be obtained
from the network models. We instead demonstrate how deep learning models
that were not specifically trained to infer the parameters to be minimized can
be used in optimisation problems.
To understand the mechanisms underlying drag reduction and to develop
optimisation algorithms, analytical and computational work has been specifically performed for Stokes flow and steady laminar flow over a body [21–27]. As far back as the 1970s, Pironneau [21] analysed the minimum drag shape for a given
volume in Stokes flow, and later for the Navier-Stokes equations [22]. By using
the adjoint variable approach, Kim and Kim [25] investigated the minimal drag
profile for a fixed cross-section area in two-dimensional laminar flow over the
Reynolds number range Re = 1 to 40. More recently, Katamine et al.
[26] studied the same problem at two Reynolds numbers Re = 0.1 and Re = 40.
With theoretical and numerical approaches, Glowinski and Pironneau [23, 24]
looked for the axisymmetric profile of given area and smallest drag in a uniform
incompressible laminar viscous flow at Reynolds numbers between $10^0$ and $10^5$,
and obtained a drag-minimal shape with a wedge of angle 90° at the front end

and a cusp rear end from an initial slender profile. Although the laminar flow
regimes are well studied, due to the separation and nonlinear nature of the fluid,
it can be challenging for surrogate models to predict the drag-minimal shape as
well as aerodynamic forces. To our knowledge, no previous studies exist that
investigate this topic and quantitatively assess the results in the context of deep
learning surrogates.
In the present paper, we adopt an approach for the U-net based flowfield in-
ference [17] and use the trained deep neural network as a flow solver in the shape
optimisation. In comparison to conventional surrogate models [28] and other
optimisation work involving deep learning [19, 20, 29], we make use of a generic
model that infers flow solutions: in our case it produces fluid pressure and ve-
locity as field quantities. That is, given encoded boundary conditions and shape,
the DNN surrogate produces a flowfield solution, from which the aerodynamic
forces are calculated. Thus, both the flowfield and aerodynamic forces can be
obtained during the optimisation. As we can fully control and generate arbi-
trary amounts of high-quality flow samples, we can train our models in a fully
supervised manner. We use the trained DNN models in the shape optimisa-
tion to find the drag minimal profile in the two-dimensional steady laminar flow
regime for a fixed cross-section area, and evaluate results w.r.t. shapes obtained
using a full Navier-Stokes flow solver in the same optimisation framework. Both
level-set and Bézier-curve based methods are employed for shape parameterisa-
tion. The implementation utilizes the automatic differentiation package of the
PyTorch package [30], so the gradient flow driving the evolution of shapes can
be directly calculated [31]. Here DNN-based surrogate models show particular
promise as they allow for a seamless integration into the optimisation algorithms
that are commonly used for training DNNs.
The purpose of the present work is to demonstrate the capability of deep
learning techniques for robust and efficient shape optimisation, and for achiev-
ing an improved understanding of the inference of the fundamental phenomena
involved in these kinds of flows. This paper is organized as follows. The math-
ematical formulation and numerical method are briefly presented in §2. The
neural network architecture and training procedure will be described in §3. The
detailed experiments and results are then given in §4 and concluding remarks
in §5.

2 Methodology
We first explain and validate our approach for computing the fluid flow envi-
ronment in which shapes should be optimised. Afterwards, we describe two
different shape parameterisations, a level-set and a Bézier-curve based one,
which we employ for our optimisation results.

Figure 1: Drag coefficients from ReD = 0.1 to 40. Surface integral values from
OpenFOAM simulations are plotted as black curves. Results based on re-sampled
points on Cartesian grids with resolutions of 128×128, 256×256 and 512×512
are plotted with red, black and green circles, respectively. All data are compared
with the experimental measurements by Tritton (1959), shown as blue squares.

[Figure 2 plot area: Cp and τw versus θ for ReD = 1, 10 and 40; the present results are compared with the SU2 code, Park et al. (1998) and Dennis & Chang (1970).]
(a) Pressure coefficient distribution (b) Wall shear stress distribution

Figure 2: Pressure coefficient and wall shear stress distributions.

2.1 Numerical procedure
We consider two-dimensional incompressible steady laminar flows over profiles
of given area and look for the minimal drag design. The profile is initialised
with a circular cylinder and updated by utilizing steepest gradient descent as
optimisation algorithm. The Reynolds number ReD in the present work is based
on the diameter of the initial circular cylinder. Equivalently, the length scale can be interpreted as the equivalent diameter for a given area S of an arbitrary shape, i.e. $D = 2\sqrt{S/\pi}$. In the present work, D ≈ 0.39424 m is used.
To calculate the flowfield around the profile at each iteration of the optimi-
sation, two methods are employed in the present study. The first approach is a
conventional steady solver of Navier-Stokes equations, i.e. simpleFoam within
the open-source package OpenFOAM (from https://openfoam.org/). The sec-
ond one is the deep learning model [17], which is trained with flowfield datasets
generated by simpleFoam, consisting of several thousand profiles at a chosen
range of Reynolds numbers. More details about the architecture of the neural
network, data generation, training and performance will be discussed in §3.
SimpleFoam is a steady-state solver for incompressible, turbulent flow using
the Semi-Implicit Method for Pressure Linked Equations (known as “SIMPLE”)
[32]. The governing equations are numerically solved by a second-order finite
volume method [33]. The unstructured mesh in the fluid domain is generated
using open source code Gmsh version 4.4.1. To properly resolve the viscous
flow, the mesh resolution is refined near the wall of the profile and the minimum
mesh size is set to ∼ 6 × 10⁻³ D, where D is the equivalent circular diameter
of the profile. The outer boundary, where the freestream boundary condition is
imposed, is set 50 m (∼ 32D) away from the wall (denoted “OpenFOAM DOM50”). The effects of domain size are assessed by performing additional simulations with domain sizes of 25 m and 100 m away from the wall (denoted “OpenFOAM DOM25” and “OpenFOAM DOM100”, respectively). Here the
drag coefficient Cd is defined as the drag force divided by the projected length
and dynamic head. As shown in figure 1, from ReD = 0.1 to ReD = 40, the
total Cd as well as the viscous Cd,v and inviscid Cd,p parts obtained from three
different domains almost collapse. Although small differences are observed when
ReD < 0.5, the predictions in the range of interest [1, 40] are consistent and
not sensitive to the domain size. The computation runs for 6000 iterations to
obtain a converged state.
To validate the setup, we compare our numerical results and literature data
in terms of the surface pressure coefficient and wall shear. As sanity checks for
the numerical setup, we also run SU2 [see 34] with the same mesh for com-
parisons. Figure 2(a) shows the distribution of the surface pressure coefficient
$C_p = (p_w - p_\infty)/(0.5\rho_\infty U_\infty^2)$ at ReD = 1, 10 and 40. Here, θ is defined as the angle
between the horizontal line and the vector from the center to a local
surface point, so that θ = 0° is the stagnation point on the upwind side and
θ = 180° on the downwind side. Only half of the surface distribution is shown
due to symmetry. The results agree well with the numerical results by Dennis
and Chang [35], and the results for OpenFOAM and SU2 collapse. In figure

2(b), the wall shear stress results from OpenFOAM compare well with those predicted by SU2. The drag coefficients in the Reynolds number range from 0.1 to 40 agree well with the experimental data by Tritton [36] in figure 1, which further supports
that the current setup and the solver produce reliable data.
To facilitate neural networks with convolution layers, the velocity and pressure fields from OpenFOAM in the region of interest are re-sampled onto a uniform Cartesian grid in a rectangular domain [−1, 1]² (≈ [−1.27D, 1.27D]²).
A typical resolution used in the present study is 128 × 128, corresponding to
the grid size of 0.02D. As also shown in figure 1, the effect of the resolution
of re-sampling on the drag calculation has been studied. The details about the
force calculation on Cartesian grids are given in §2.2.1. Results with three dif-
ferent resolutions, shown as colored symbols, i.e. 128², 256², and 512², compare
favourably with the surface integral values based on the original mesh in Open-
FOAM. Therefore, sampled fields on the 128 × 128 grid will be used in the deep
learning framework and optimisation unless otherwise noted.
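As a concrete illustration, this re-sampling step can be sketched as follows, assuming the OpenFOAM cell-centre coordinates and fields have already been exported to NumPy arrays; the function and variable names are illustrative, not part of the original pipeline:

```python
import numpy as np
from scipy.interpolate import griddata

def resample_to_cartesian(cell_xy, fields, n=128):
    """Re-sample scattered cell-centre data (e.g. p, u, v) onto a uniform
    n x n Cartesian grid spanning the region of interest [-1, 1]^2."""
    xs = np.linspace(-1.0, 1.0, n)
    gx, gy = np.meshgrid(xs, xs)  # target grid points
    # Linear interpolation from the unstructured mesh; points inside the
    # body can be filled afterwards by nearest-neighbour extrapolation.
    return {name: griddata(cell_xy, vals, (gx, gy), method="linear")
            for name, vals in fields.items()}
```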

2.2 Shape parameterisation


2.2.1 Level-set method
The level set method proposed by Osher and Sethian [37] is a technique that
tracks an interface implicitly and has been widely used in fluid physics, image
segmentation, computer vision as well as shape optimisation [38–40]. The level
set function φ is a higher dimensional auxiliary scalar function, the zero-level
set contour of which is the implicit representation of a time-dependent surface
$\Gamma(t) = \{x : \phi(x) = 0\}$. Here, let $D \subset \mathbb{R}^N$ be a reference domain, $x \in D$, and Ω a body created by the enclosed surface Γ. Specifically, in the present study the domain D refers to the sampled Cartesian grid in the rectangular region, and N = 2 as we focus on two-dimensional problems. The level set function φ
is defined by a signed distance function (SDF):

$$\phi = \begin{cases} -d(\Gamma(t)) & x \in \Omega \\ 0 & x \in \partial\Omega \text{ (or } \Gamma\text{)} \\ d(\Gamma(t)) & x \in D - \Omega \end{cases} \qquad (1)$$

where $d(\Gamma(t))$ denotes the Euclidean distance from $x$ to $\Gamma$.

The arc length $c$ and area $S$ of the body are formulated as $c = \int_D \delta_\epsilon(\phi)|\nabla\phi|\,ds$ and $S = \int_D H_\epsilon(-\phi)\,ds$. To make the operators differentiable, we use the smoothed Heaviside and Dirac delta functions $H_\epsilon(x) = \frac{1}{1+e^{-x/\epsilon}}$ and $\delta_\epsilon(x) = \partial_x \frac{1}{1+e^{-x/\epsilon}}$, respectively, where $\epsilon$ is a small positive number, chosen as twice the grid size [41]. Then, the aerodynamic forces due to the pressure distribution
and viscous effect are described as
$$F_{pressure} = \int_{\partial\Omega} (p\,n)\,dl = \int_D (p\,n)\,\delta_\epsilon(\phi)|\nabla\phi|\,ds \qquad (2)$$
$$F_{viscous} = \int_{\partial\Omega} (\mu\,n \times \omega)\,dl = \int_D (\mu\,n \times \omega)\,\delta_\epsilon(\phi)|\nabla\phi|\,ds. \qquad (3)$$

Here, $n = \frac{\nabla\phi}{\|\nabla\phi\|}$ is the unit normal vector, $p$ is the pressure, $\mu$ is the dynamic viscosity, and $\omega = \nabla \times v$ is the vorticity with $v$ being the velocity. A nearest-neighbour method is used to extrapolate values of pressure and vorticity inside the shape Ω. Then, the drag force is considered as the loss in the optimisation, i.e.
$$L = F_{pressure} \cdot \hat{i}_x + F_{viscous} \cdot \hat{i}_x \qquad (4)$$
where $\hat{i}_x$ is the unit vector in the direction of the x axis.
The minimisation of equation (4) is solved by the following equation:

$$\frac{\partial\phi}{\partial\tau} + V_n|\nabla\phi| = 0 \qquad (5)$$
Here, the normal velocity is defined as $V_n = \frac{\partial L}{\partial\phi}$. At every iteration, the Eikonal equation is solved numerically with the fast marching method to ensure $|\nabla\phi| \approx 1.0$ [42]. Then, we have $\frac{\partial\phi}{\partial\tau} \propto -\frac{\partial L}{\partial\phi}$, which is a gradient flow that minimises the loss function L and drives the evolution of the profile [31, 43]. For a more rigorous
mathematical analysis we refer to Kraft [31]. In the present work, the automatic
differentiation functionality of PyTorch is utilized to efficiently minimize equation
(4) via gradient descent. Note that the level-set based surface representation and
optimisation algorithm are relatively independent modules, and can be coupled
with any flow solver, such as OpenFOAM and SU2, so long as the solver provides
a sampled flowfield on the Cartesian grid (e.g. 128 × 128) at an iteration in the
optimisation. We will leverage this flexibility by replacing the numerical solver
with a surrogate model represented by a trained neural network below.
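As an illustration of equations (2)-(4), the following PyTorch sketch evaluates the drag loss from grid-sampled fields. It is a minimal reconstruction with illustrative names, layout conventions and discretisation choices of our own, not the authors' exact implementation:

```python
import torch

def smoothed_delta(phi, eps):
    # delta_eps(x) = d/dx H_eps(x), with H_eps(x) = 1 / (1 + exp(-x/eps)).
    h = torch.sigmoid(phi / eps)
    return h * (1.0 - h) / eps

def drag_loss(phi, p, omega, mu, h, eps):
    """Discrete x-component of equations (2)-(4) on a uniform grid of
    spacing h; phi, p, omega are (n, n) tensors and phi requires grad."""
    dphi_dy, dphi_dx = torch.gradient(phi, spacing=h)  # rows = y (assumed)
    grad_norm = torch.sqrt(dphi_dx**2 + dphi_dy**2 + 1e-12)
    n_x = dphi_dx / grad_norm  # unit normal components
    n_y = dphi_dy / grad_norm
    delta = smoothed_delta(phi, eps)
    f_p = torch.sum(p * n_x * delta * grad_norm) * h * h  # eq. (2), x comp.
    # In 2D, (n x omega) has x-component n_y * omega_z (up to sign).
    f_v = torch.sum(mu * n_y * omega * delta * grad_norm) * h * h  # eq. (3)
    return f_p + f_v  # eq. (4), the loss L
```

Calling `loss.backward()` on this scalar provides ∂L/∂φ on the whole grid, i.e. the normal velocity Vn that drives equation (5).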

2.2.2 Bézier-curve based parameterisation


Bézier-curve based shape parameterisation is a widely accepted technique
in aerodynamic studies [44–46]. This work utilizes two Bézier curves,
representing the upper and lower surfaces of the profile, denoted with the superscript
k = {u, l}. The control points $P_i^k \in D$ are the parameters of the optimisation framework. The Bézier curves are defined via the following equation:
$$B^k(t) = \sum_{i=0}^{n} \binom{n}{i}\, t^i (1-t)^{n-i}\, P_i^k \qquad (6)$$

where $t \in [0, 1]$ denotes the sample points along the curves. The first and last control points of each curve share the same parameters to construct the closure Ω of the profile.
A binary labeling of the Cartesian grid D is performed as
$$\chi = \begin{cases} 1 & x \in \Omega \\ 0 & x \in D - \Omega \end{cases} \qquad (7)$$

where χ is the binary mask of the profile and x is the coordinate of a point on
the Cartesian grid. The normal vector n is obtained by applying a convolution

with a 3 × 3 Sobel operator kernel on χ. Then, the forces are calculated as
$$F_{pressure} = \sum_{i \in D-\Omega} (p\,n)_i\, \Delta l_i \qquad (8)$$
$$F_{viscous} = \sum_{i \in D-\Omega} (\mu\,n \times \omega)_i\, \Delta l_i \qquad (9)$$

where $i$ is the index of a point outside the profile and $\Delta l_i$ is the grid size at point $i$. Thereby, the drag L is calculated using equation (4). As for the level-set representation, the shape gradient $\frac{\partial L}{\partial P_i^k}$ is computed via automatic differentiation in order to drive the shape evolution to minimize L.
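For the Bézier representation, equation (6) can be evaluated as in the following minimal sketch (illustrative names; the rasterisation to the mask χ of equation (7) and the Sobel-based normals are omitted):

```python
import torch
from math import comb

def bezier_curve(ctrl, m=200):
    """Evaluate the Bézier curve of equation (6) at m parameter values
    t in [0, 1]; ctrl is an (n+1, 2) tensor of control points P_i^k."""
    n = ctrl.shape[0] - 1
    t = torch.linspace(0.0, 1.0, m)
    # Bernstein basis C(n, i) * t^i * (1 - t)^(n - i), shape (m, n+1).
    basis = torch.stack(
        [comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)], dim=1)
    return basis @ ctrl  # (m, 2) points on the curve
```

Since the curve points are differentiable w.r.t. `ctrl`, autograd can propagate the drag loss back to the control-point coordinates.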

3 Neural network architecture and training procedure
3.1 Architecture
The neural network model is based on a U-Net architecture [47], a convolutional
network originally used for the fast and precise segmentation of images. Fol-
lowing the methodology of previous work [17], we consider the inflow boundary
conditions (i.e. u∞ , v∞ ) and the shape of profiles (i.e. the binary mask) on
the Cartesian grid 128 × 128 as three input channels. In the encoding part, 7
convolutional blocks are used to transform the input (i.e. 128² × 3) into a sin-
gle data point with 512 features. The decoder part of the network is designed
symmetrically with another 7 layers in order to reconstruct the outputs with
the desired dimension, i.e. 128² × 3, corresponding to the flowfield variables
[p, u, v] on the 128 × 128 Cartesian grid. Leaky ReLU activation functions with
a slope of 0.2 are used in the encoding layers, and regular ReLU activations in
the decoding layers.
In order to assess the performance of the deep learning models, we have
tested three different models with weight counts of 122k, 1.9m and
30.9m, respectively, which are later referred to as the small-, medium- and large-scale
networks.
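The following sketch shows one encoder and one decoder block in the spirit of this architecture; the channel counts, kernel sizes and normalisation layers are illustrative assumptions rather than the exact configuration used in the paper:

```python
import torch.nn as nn

def enc_block(c_in, c_out):
    # One of 7 encoder blocks: a strided convolution halves the spatial
    # resolution (128 -> 64 -> ... -> 1), followed by LeakyReLU(0.2).
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2))

def dec_block(c_in, c_out):
    # Mirrored decoder block: a transposed convolution doubles the
    # resolution; the skip connections of the full U-Net are omitted here.
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU())
```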

3.2 Dataset generation


For the training dataset, it is important to have a comprehensive coverage of
the space of targeted solutions. In the present study, we utilize the parametric
Bézier curve defined by equation (6) to generate randomized symmetric shape
profiles subject to a fixed area constraint S.
To parameterise the upper surface of the profile, two points at the leading
and trailing edges are fixed and 4 control points are positioned in different
regions. As depicted in figure 3(a), the region of interest is divided into 4
columns separated by the border lines, and each control point of the upper
Bézier curve is only allowed to be located within its corresponding column-wise

(a) Bézier control points (b) Randomly generated shapes

Figure 3: Shape generation using two Bézier curves. The region of interest
is divided into 4 columns, and each column-wise region is further split into 5
sub-regions.

region. Each column-wise region is further split into 5 sub-regions to produce


diversified profiles. The sub-regions give $5^4 = 625$ possible permutations, with
control points being placed randomly in each sub-region. This procedure is
repeated 4 times, producing $4 \times 625 = 2500$ Bézier curves in total. Figure
3(b) shows some examples from this set.
Based on these 2500 geometries, we then generate three sets of training data,
as summarised in table 1.
(1) We run OpenFOAM with fixed ReD = 1 for all of the 2500 profiles to
obtain 2500 flowfields, denoted as “Dataset-1”.
(2) The second dataset is similar but all of the 2500 simulations are con-
ducted at ReD = 40 (“Dataset-40”).
(3) The third dataset is generated to cover a continuous range of Reynolds
numbers, in order to capture a space of solutions that not only varies over the
immersed shapes, but additionally captures a dimension of varying flow physics
with respect to a chosen Reynolds number. For this, we run a simulation by
randomly choosing a profile Ω∗i among 2500 geometries and a Reynolds number
in the range of Re∗D ∈ [0.5, 42.5]. As we know that drag scales logarithmically
w.r.t. Reynolds number, we similarly employ a logarithmic sampling for the
Reynolds number dimension. We use a uniform distribution random variable
κ ∈ [log 0.5, log 42.5], leading to Re∗D = $10^\kappa$ uniformly distributed in log scale.
In total we have obtained 3028 flowfields, which we refer to as “Dataset-Range”.
Figure 4 shows the distribution of all the flowfield samples from “Dataset-Range” on the Ω∗i − Re∗D map, with Re∗D in log scale. It is worth noting that there are 759 flowfield samples in the range Re∗D ∈ [0.5, 1.5], shown in red, 287 samples with Re∗D ∈ [8, 12], colored in green, and 66 samples with Re∗D ∈ [38, 42], in blue.

Name            # of flowfields   Re         NN models
Dataset-1       2500              1          small, medium & large
Dataset-40      2500              40         small, medium & large
Dataset-Range   3028              0.5–42.5   large

Table 1: Three datasets for training the neural network models.
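A minimal sketch of this sampling procedure (names are ours):

```python
import numpy as np

rng = np.random.default_rng()

def sample_case(n_profiles=2500):
    """Draw one (profile index, Reynolds number) pair for "Dataset-Range":
    the profile is uniform over the 2500 geometries, and Re_D = 10**kappa
    with kappa uniform in [log10(0.5), log10(42.5)]."""
    idx = rng.integers(n_profiles)
    kappa = rng.uniform(np.log10(0.5), np.log10(42.5))
    return idx, 10.0**kappa
```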

3.3 Pre-processing
Proper pre-processing of the data is crucial for obtaining a high inference ac-
curacy from the trained neural networks. Firstly, the nondimensional flowfield
variables are calculated by
$$\hat{p}_i = (p_i - p_{i,mean})/U_{\infty,i}^2, \qquad \hat{u}_i = u_i/U_{\infty,i}, \qquad \hat{v}_i = v_i/U_{\infty,i}.$$
Here, $i$ denotes the $i$-th flowfield sample in the dataset, $p_{mean}$ the simple arithmetic mean pressure, and $U_\infty = \sqrt{u_\infty^2 + v_\infty^2}$ the magnitude of the freestream velocity.
As the second step, all input channels and target flowfield data in the training
dataset are normalised to the range of [−1, 1] in order to minimise the errors
from limited precision in the training phase. To do so, we need to find the
maximum absolute values for each flow variable in the entire training dataset, i.e. $|\hat{p}|_{max}$, $|\hat{u}|_{max}$ and $|\hat{v}|_{max}$. Similarly, the maximum absolute values of the freestream velocity components are $|u_\infty|_{max}$ and $|v_\infty|_{max}$. Then we get the final normalised flowfield variables in the following form:
$$\tilde{p}_i = \hat{p}_i/|\hat{p}|_{max}, \qquad \tilde{u}_i = \hat{u}_i/|\hat{u}|_{max}, \qquad \tilde{v}_i = \hat{v}_i/|\hat{v}|_{max}$$
and the normalised freestream velocities used for the input channels are
$$\tilde{u}_{\infty,i} = u_{\infty,i}/\max(|u_\infty|_{max},\, 1\times 10^{-18}), \qquad \tilde{v}_{\infty,i} = v_{\infty,i}/\max(|v_\infty|_{max},\, 1\times 10^{-18})$$
The freestream velocities appear in the boundary conditions, on which the solu-
tion globally depends, and should be readily available spatially and throughout
the different layers. Thus, freestream conditions and the shape of the profile are
encoded in a 128² × 3 grid of values. The magnitude of the freestream velocity
is chosen such that it leads to a desired Reynolds number.
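The two pre-processing steps for a single sample can be sketched as follows (a minimal reconstruction with illustrative names):

```python
import numpy as np

def preprocess_sample(p, u, v, u_inf, v_inf, p_max, u_max, v_max):
    """Non-dimensionalise one flowfield sample by its freestream velocity
    magnitude, then normalise to [-1, 1] with the dataset-wide maxima
    p_max = |p^|max, u_max = |u^|max, v_max = |v^|max."""
    U_inf = np.sqrt(u_inf**2 + v_inf**2)
    p_hat = (p - p.mean()) / U_inf**2  # pressure, dynamic-head scaling
    u_hat, v_hat = u / U_inf, v / U_inf
    return p_hat / p_max, u_hat / u_max, v_hat / v_max
```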

Figure 4: Distribution of flowfield samples from “Dataset-Range” on the Ω∗i −
Re∗D map. The indices of geometries Ω∗i are from 0 to 2499. The red symbols
denote the flowfield samples with Re∗D ∈ [0.5, 1.5], the green ones with Re∗D ∈
[8, 12] and the blue ones with Re∗D ∈ [38, 42].

3.4 Training details
The neural network is trained with the Adam optimiser [48] in PyTorch. An L1
difference $L = |y_{truth} - y_{prediction}|$ is used for the loss calculation. For most
of the cases, the training runs converge after 100k iterations with a batch size
of 10 (unless otherwise mentioned). An 80% to 20% split is used for training
and validation sets, respectively. The validation set allows for an unbiased
evaluation of the quality of the trained model during training, for example, to
detect overfitting. In addition, as learning rate decay is used, the variance of the
learning iterations gradually decreases, which lets the training process fine-tune
the final state of the model.
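A minimal training step in this spirit might look as follows; the learning rate and its decay schedule are not specified in the text, so the values below are placeholders:

```python
import torch

def train(model, loader, n_iters=100_000, lr=4e-4):
    """Supervised training with Adam and an L1 loss; ExponentialLR stands
    in for the (unspecified) learning-rate decay used in the paper."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99997)
    it = 0
    while it < n_iters:
        for inputs, targets in loader:  # batches of 10 samples
            opt.zero_grad()
            loss = torch.nn.functional.l1_loss(model(inputs), targets)
            loss.backward()
            opt.step()
            sched.step()
            it += 1
            if it >= n_iters:
                return
```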
Figure 5 shows the training and validation losses for three models that are
trained using “Dataset-1”, i.e. small-scale, medium-scale and large-scale models,
respectively. All the three models converge at stable levels of training and
validation loss after 500 epochs. Looking at the training evolution for the small-
scale model in figure 5(a), numerical oscillation can be seen in the early stage of
the validation loss history, which is most likely caused by the smaller number of
free parameters in the small-scale network. In contrast, the medium- and large-
scale models show a smoother loss evolution, and the gap between validation
and training losses indicates a slight overfitting as shown in figures 5(b) and 5(c).
Although the training of the large-scale model exhibits a spike in the loss value
at an early stage due to an instantaneous pathological configuration of mini-batch
data and learned state, the network recovers and eventually converges to
lower loss values. Similar spikes can be seen in some of the other training runs,
and could potentially be removed via gradient-clipping algorithms, which we,
however, did not find necessary to achieve reliable convergence.
Figure 6 presents the training and validation losses for three models trained
with “Dataset-40”. Similarly, convergence can be achieved after 500 epochs.
Compared to the training evolution at ReD = 1, the models trained at ReD = 40 have
smaller gaps between training and validation losses, indicating that the over-
fitting is less pronounced than for ReD = 1. We believe this is caused by the
smoother and more diffusive flowfields at ReD = 1 (close to Stokes flow), in con-
trast to the additional complexity of the solutions at ReD = 40, which already
exhibit separation bubbles.
We use “Dataset-Range” to train the model for a continuous range of Reynolds
numbers. As this task is particularly challenging, we directly focus on the large
scale network that has 30.9m weights. To achieve better convergence for this
case, we run 300k iterations with the batch size of 10, which leads to more than
1200 epochs. As shown in figure 7 training and validation losses converge to
stable levels, and do not exhibit overfitting over epochs. The final values are
2.80 × 10⁻⁴ and 6.05 × 10⁻⁴, respectively.
To summarise, having conducted the above-mentioned training, we obtain
seven neural network models, i.e. models of three network sizes for “Dataset-1”
and “Dataset-40”, and a ranged model trained with “Dataset-Range”, as listed in
table 1. These neural networks will be used as surrogate models in the op-
timisation in the next section. We will also compare the results from neural

[Figure 5 plot area: training and validation loss histories over 500 epochs for (a) the small-scale, (b) the medium-scale and (c) the large-scale neural networks.]

Figure 5: Training (in blue) and validation (in orange) losses of three different
scales of models trained with “Dataset-1”.

[Figure 6 plot area: training and validation loss histories over 500 epochs for (a) the small-scale, (b) the medium-scale and (c) the large-scale neural networks.]

Figure 6: Training (in blue) and validation (in orange) losses of three different
scales of models trained with “Dataset-40”.

[Figure 7 plot area: training and validation loss histories over more than 1200 epochs for the large-scale model.]
Figure 7: Training (in blue) and validation (in orange) losses of large-scale model
trained with “Dataset-Range”.

network models with corresponding optimisations conducted with the Open-
FOAM solver, and evaluate the performance and accuracy of the optimisation
runs.

4 Shape optimisation results


The initial shape for the optimisation is a circular cylinder with diameter D ≈ 0.39424 m. The integral value of the drag force from equation (4) is adopted as the objective function. The mathematical formulation of the optimisation for the shape Ω, bounded by the curve Γ, the surface of the profile, is expressed as

$$\begin{aligned} \min\quad & \mathrm{Drag}(\Omega) \\ \text{subject to}\quad & \text{Area } S(\Omega) = S_0 \\ & \text{Barycenter } b(\Omega) = \frac{1}{S(\Omega)}\int_\Omega x\,ds = (0, 0) \end{aligned}$$
For the level-set representation, the profile Ω is the region where φ ≤ 0 and
the constrained optimisation problem is solved as follows:
(1) Initialise the level set function φ such that the initial shape (i.e. a circular
cylinder) corresponds to φ = 0.
(2) For a given φ, calculate the drag (i.e. the loss L) using equations (2)-(4). Terminate if the optimisation converges, e.g. when the drag history reaches a statistically steady state.
(3) Calculate the gradient $\frac{\partial L}{\partial\phi}$. Consider an unconstrained minimisation problem and solve equation (5) as follows:
$$\phi^{n+1} \Leftarrow \phi^n - \Delta\tau \frac{\partial L}{\partial\phi}\|\nabla\phi\|$$
In practice, we update φ using the second-order Runge-Kutta method, and
discretise the convection term with a first-order upwind scheme [39]. We assume
the derivatives of the flowfield variables (i.e. pressure and velocity) are significantly
smaller than those w.r.t. the shape. Hence, we treat both fields as constants
for each step of the shape evolution. To ensure the correct search direction for
optimisation, we use a relatively small pseudo time step ∆τ , which is calculated
with a CFL number of 0.8.
(4) To ensure $\|\nabla\phi\| \approx 1$, the fast marching method is used to solve the Eikonal equation [42].
(5) The area of the shape Ω is obtained by $S = \int_D H_\epsilon(-(\phi + \eta))\,ds$, where $\eta$ is an adjustable constant. We optimise $\eta$ such that $\|S - S_0\| < \epsilon$. Then, we update $\phi^{n+1} \Leftarrow \phi^{n+1} + \eta$.
(6) Check whether the barycenter is at the origin: $\|b - o\| < \epsilon$. If not, solve equation (5) to update $\phi^{n+1}$ by replacing $V_n$ with a constant translation velocity so that the barycenter of the shape Ω moves towards the origin. Continue with (2). A condensed sketch of this loop is given below.
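A condensed Python sketch of steps (1)-(6), reusing the hypothetical drag_loss function sketched in §2.2.1; the second-order Runge-Kutta update, the upwind discretisation and the area/barycenter corrections are simplified or omitted, and the scikit-fmm package (skfmm) stands in for the fast marching reinitialisation:

```python
import torch
import skfmm  # scikit-fmm, for fast marching reinitialisation

def optimise_shape(phi0, solve_flow, mu, h, eps, n_iters=200, cfl=0.8):
    """Condensed level-set optimisation loop. solve_flow(phi) returns the
    (p, omega) fields for the current shape, e.g. from a DNN surrogate."""
    phi = torch.tensor(phi0, requires_grad=True)
    for _ in range(n_iters):
        p, omega = solve_flow(phi.detach())  # fields held constant per step
        loss = drag_loss(phi, p, omega, mu, h, eps)  # eq. (4)
        loss.backward()  # V_n = dL/dphi via automatic differentiation
        with torch.no_grad():
            dtau = cfl * h / phi.grad.abs().max()  # illustrative step size
            phi -= dtau * phi.grad  # explicit gradient-descent update
            phi.grad.zero_()
            # Reinitialise phi to a signed distance function, |grad phi| ~ 1.
            phi.copy_(torch.from_numpy(
                skfmm.distance(phi.detach().numpy(), dx=h)))
        # Steps (5)-(6), area and barycenter corrections, omitted here.
    return phi.detach()
```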
Our main focus lies on level-set representations, while the Bézier curve pa-
rameterisation with reduced degrees of freedom is used for comparison pur-
poses. When Bézier curves are used, the constrained optimisation differs from the above-mentioned loop in the following ways: in (1)-(3), the coordinates of the Bézier-curve control points are used as the design variables to be initialised and updated; in (5) and (6), the area of Ω and the barycenter are calculated based on the region enclosed by the Bézier curve, where the inner region is 1 and the outer region is 0.

[Figure 8 plot area: drag coefficient histories over 200 iterations for (a) OpenFOAM and the (b) small-scale, (c) medium-scale and (d) large-scale neural networks.]

Figure 8: Optimisation histories at ReD = 1. The black solid lines denote the results using neural network models trained with “Dataset-1” and the blue solid lines denote the results from OpenFOAM. Results calculated with the re-sampled flowfields on the 128 × 128 Cartesian grid are denoted by 128². The red cross symbols represent the OpenFOAM results obtained with its native postprocessing tool.
In the optimisation experiments, the flowfield solvers used are OpenFOAM (as a baseline) and the small-, medium- and large-scale neural network models. As additional validation of the optimisation procedure, we also compare with additional runs based on the Bézier-curve parameterisation with a medium-scale neural network model.

4.1 Optimisation experiment at ReD = 1


Figures 8(a-d) present the drag coefficients over 200 optimisation iterations using
the OpenFOAM solver and three neural network models.

Figure 9: The converged shapes at ReD = 1 with intermediate states predicted
by large-scale NN model at every 10th iteration.

Here, the drag coefficient Cd is defined as the drag divided by the projected length of the initial cylinder and the dynamic head. The same definition is used for all other experiments in the present paper. As the ground truth, figure 8(a) shows the case which
uses the OpenFOAM solver in the optimisation. The history of drag values,
shown in blue, is calculated based on the re-sampled data on the Cartesian
grid (i.e. 128²). For comparison, the drag values obtained from the surface
integral in OpenFOAM's native post-processing code are shown with red
markers. As can be seen in figure 8(a), after convergence of the optimisation
the total drag drops 6.3% from 10.43 to 9.78. To further break it down, the
inviscid part decreases significantly from 5.20 to 2.50 (∼ 51.8%) while the viscous
part gradually increases from 5.23 to 7.27 (∼ 31.0%). This is associated with
the elongation of the shape from a circular cylinder to an “oval”, eventually
becoming a rugby-ball shape, as shown in figure 9.
From figures 8(b-d), one can observe that the histories of the drag values are reasonably well predicted by the neural network models and agree with the OpenFOAM solution in figure 8(a). While the small-scale model exhibits noticeable oscillations in the optimisation procedure, the medium and large-scale neural network models provide smoother predictions, and the drag of both the initial and final shapes agrees well with that from the re-sampled data (blue lines) and from OpenFOAM's native post-processing code (red symbols).
Figure 9 depicts the converged shapes of all four solvers. For comparison,
the Bézier curve based result is also shown. The ground truth result using
OpenFOAM ends up with a rugby-ball shape, in good agreement with
the data of Kim and Kim [25]. The medium and large-scale neural network models collapse
and compare favourably with the ground truth result. In contrast, the small-
scale neural network model's prediction is slightly off, which is not surprising given the oscillation and offset of its drag history in figure 8(b) discussed before. A possible reason is that the small-scale model has fewer weights, so that the complexity of the flow evolution cannot be fully captured. It is worth noting that the reduced performance of the Bézier representation in the present work is partly due to discretization errors when calculating the normal vectors, in combination with a reduced number of degrees of freedom.
The x-component velocity fields with streamlines for the optimised shapes
are shown in figure 10. The flowfields and streamline patterns in all three
cases with neural networks show no separation, which is consistent with
the ground truth result in figure 10(a). Considering the final shape obtained
using the three neural network surrogates, the medium- and large-scale models
give satisfactory results that are close to the OpenFOAM result.

4.2 Optimisation experiment at ReD = 40


As the Reynolds number increases past the critical Reynolds number ReD ≈ 47,
the circular cylinder flow configuration loses its symmetry and becomes unstable,
which is known as the Kármán vortex street. We consider optimisations for the
flow regime at ReD = 40, which is of particular interest because it exhibits a
steady-state solution, yet is close to the critical Reynolds number.

(a) OpenFOAM (b) Small-scale neural network

(c) Medium-scale neural network (d) Large-scale neural network

Figure 10: Streamlines and the x-component velocity fields u/U∞ at ReD = 1.

[Figure 11 plot area: drag coefficient histories over 200 iterations for (a) OpenFOAM and the (b) small-scale, (c) medium-scale and (d) large-scale neural networks.]

Figure 11: Optimisation histories at ReD = 40. The black solid lines denote
the results using neural network models trained with “Dataset-40” and the blue
solid lines denote the results from OpenFOAM. Results calculated with the re-
sampled flowfields on the 128 × 128 Cartesian grid are denoted by 128². The
red cross symbols represent the OpenFOAM results obtained with its native
postprocessing tool.

Figure 12: The converged shapes at ReD = 40 with intermediate states pre-
dicted by large-scale NN model at every 10th iteration.

(a) OpenFOAM (b) Small-scale neural network

(c) Medium-scale neural network (d) Large-scale neural network

Figure 13: Streamlines and the x-component velocity fields u/U∞ at ReD =
40 obtained with different solvers, i.e. OpenFOAM, and three neural network
models trained with “Dataset-40”.

The steady separation bubbles behind the profile further compound the learning task and
the optimisation, making this a good test case for the proposed method.
The ground truth optimisation result using OpenFOAM is shown in figure
11(a). The shape is initialised with a circular cylinder and is optimised to
minimise drag over 200 iterations. As a result, the total drag, processed on the
Cartesian grid, drops from 1.470 to 1.269 (∼ 13.7% reduction). Associated with
the elongation of the shape, the inviscid drag decreases 41.3% while the viscous
drag increases 41.3%. The initial and final results of OpenFOAM's
native post-processing are shown in red, indicating good agreement.
11(b-d) present the drag histories over 200 optimisation iterations with three
neural network models that are trained with “Dataset-40”. Although larger
oscillations are found in the drag history of the small-scale model, the medium
and large-scale models predict smoother drag histories and compare well with the
ground truth data from OpenFOAM.
The final converged shapes are compared to a reference result [26] in figure
12. The upwind side forms a sharp leading edge while the downwind side of
the profile develops into a blunt trailing edge. Compared to the reference data
[26] and the result using Bézier-curve based method, the use of level-set based
method leads to a slightly flatter trailing edge, probably because more degrees of
freedom for the shape representation are considered in the level-set based method.
Looking further at the details of the shapes in figure 13, it can be seen that
the more weights the neural network model contains, the closer it compares to
the ground truth result using OpenFOAM. The large scale model which has
the largest weight count is able to resolve the fine feature of the flat trailing edge, as shown in figure 13(d). In contrast, in figure 13(b), the small-scale model does not capture this, and the surface of the profile even exhibits pronounced roughness. Nonetheless, all three DNN models predict flow patterns similar to the ground truth result depicted with streamlines, which are characterised by recirculation regions downstream of the profiles.
It should be mentioned that the optimised shape at ReD = 40 by Kim and
Kim [25] differs from the one in the present study and the one by Katamine
et al. [26]. In the former [25], the optimised profile converges to an elongated slender shape with an even smaller drag force. Most likely, this is because an additional wedge-angle constraint is imposed at both the leading and trailing edges, which is not adopted in our work or by Katamine et al. [26]. As we
focus on deep learning surrogates in the present study, we believe the topic of
including additional constraints will be an interesting avenue for future work.
In the comparison to the ground truth from OpenFOAM, the current results are
deemed to be in very good agreement.

4.3 Shape optimisations for an enlarged solution space


The generalising capabilities of neural networks are a challenging topic [49]. To
evaluate their flexibility in our context, we target shape optimisations in the
continuous range of Reynolds numbers from ReD = 1 to 40, over the course of
which the flow patterns change significantly [36, 50]. Hence, in order to succeed,

a neural network not only has to encode the change of the solutions w.r.t. the immersed
shape but also the changing physics across the different Reynolds numbers. In this
section, we conduct four tests at ReD = 1, 5, 10, and 40 with the ranged
model in order to quantitatively assess its ability to make accurate flowfield
predictions over the chosen change of Reynolds numbers. The corresponding
OpenFOAM runs are used as ground truth for comparisons.
The optimisation histories for the four cases are plotted in figures 14(a-d).
Despite some oscillations, the predicted drag values as well as the inviscid and
viscous parts agree well with the ground truth values from OpenFOAM. The
total drag force, as the objective function, is reduced and reaches a stable
state in each case. The performance of the ranged model at ReD = 40 is reasonably good, although it is slightly outperformed by the specialized NN model trained with “Dataset-40”. Potentially, the accuracy could be increased in this region by providing more data at the upper end of the Reynolds number range, as “Dataset-Range” only contains 66 samples with Re∗D ∈ [38, 42], shown in blue in figure 4.
In line with the previous runs, the overall trend of optimisation for the four
cases shows that the viscous drag increases while the inviscid part decreases as
shown in figures 14(a-d), which is associated with an elongation of the profile
and the formation of a sharp leading edge. The final shapes after optimisation
for the four Reynolds numbers are summarised in figure 15. In all four cases, the
profile eventually develops a sharp leading edge, while the trailing edges
differ. At ReD = 1 and 5, the profiles converge with sharp trailing edges, as
depicted in figures 15(a) and 15(b). The corresponding flowfields in figures
16(a) and 16(b) also show no separation.
As shown in figures 15(c) (ReD = 10) and 15(d) (ReD = 40), blunt trailing edges form the final shapes, and the profile at ReD = 10 is more slender than that for ReD = 40. The higher Reynolds number leads to a flattened trailing edge, associated with the occurrence of the recirculation region shown in figures 16(c) and 16(d) and with the gradient of the objective function becoming relatively weak in these regions. In terms of accuracy, the converged shapes at ReD = 1, 5, and 10 compare favourably with the results from OpenFOAM. Compared to the ground truth shapes, only the final profile at ReD = 40 predicted by the ranged model shows slight deviations near the trailing edge. Thus, given the non-trivial changes of flow behavior across the targeted range of Reynolds numbers, the neural network yields robust and consistent performance.

4.4 Performance
The performance of trained deep neural network models is one of the central
factors motivating their use. We evaluate our models on a standard workstation with 12 cores, i.e. an Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz, with an NVidia GeForce RTX 2060 GPU. The optimisation run at ReD = 1, which consists of 200 iterations, is chosen for evaluating the run times of the different solvers, i.e. OpenFOAM and the DNN models of three sizes trained with “Dataset-1”. Due to the
strongly differing implementations, we compare the different solvers in terms

[Figure 14 plot area: drag coefficient histories over 200 iterations at (a) ReD = 1, (b) ReD = 5, (c) ReD = 10 and (d) ReD = 40.]

Figure 14: Optimisation history for the four cases at ReD = 1, 5, 10, and 40.
The black solid lines denote the results using neural network models (i.e. the
ranged model) and the blue solid lines denote the results from OpenFOAM.
Results calculated with the re-sampled flowfields on the 128 × 128 Cartesian
grid are denoted by 128². The red cross symbols represent the OpenFOAM
results obtained with its native postprocessing tool.

Solver             Wall time   Platform
OpenFOAM           16.3 hr     CPU only, 9 cores
Small-scale DNN    97 sec      CPU, 1 core & GPU
Medium-scale DNN   106 sec     CPU, 1 core & GPU
Large-scale DNN    196 sec     CPU, 1 core & GPU

Table 2: Run times for 200 optimisation iterations at ReD = 1.

(a) ReD = 1 (b) ReD = 5

(c) ReD = 10 (d) ReD = 40

Figure 15: Shapes after optimisation at ReD = 1, 5, 10, and 40. The black solid
lines denote the results using neural network models (i.e. the ranged model), the
blue dashed lines denote the results from OpenFOAM and the symbols denote
the corresponding reference data.

(a) ReD = 1 (b) ReD = 5

(c) ReD = 10 (d) ReD = 40

Figure 16: Streamlines and the x-component velocity fields u/U∞ at ReD = 1,
5, 10, and 40 using the ranged model.

of elapsed wall clock time. As listed in table 2, it takes 16.3 hours using 9
cores (or 147 core-hours) for OpenFOAM to complete such a case. Compared
to OpenFOAM, the DNN model using the GPU reduces the computational cost
significantly. The small-scale model requires 97 seconds and even the large-scale
model only takes less than 200 seconds to accomplish the task. Therefore, relative to OpenFOAM, the speed-up factor is between 300X and 600X. Even when
considering a factor of ca. 10 in terms of GPU advantage due to an improved
on-chip memory bandwidth, these measurements indicate the significant reduc-
tions in terms of runtime that can potentially be achieved by employing trained
neural networks.

5 Concluding remarks
In this paper, deep neural networks are used as surrogate models to carry out shape optimisation for drag minimisation of the flow past a profile of given area, subject to two-dimensional incompressible flow at low Reynolds numbers. Both level-set and Bézier-curve representations are adopted to parameterise the shape, and the integral values on the re-sampled Cartesian grid are used as the optimisation objective. The gradient flow that drives the evolution of the shape profile is calculated by automatic differentiation in a deep learning framework, which seamlessly integrates with trained neural network models.
Through the optimisation, the drag values predicted by the neural network models agree well with the OpenFOAM results and show consistent trends. As the total drag decreases, the inviscid drag decreases while the viscous contribution increases, which is associated with the elongation of the shape. It is demonstrated that the present DNN models are able to predict satisfactory drag forces, and the proposed optimisation framework shows promise for general aerodynamic design. In conjunction with the low run-time
of the trained deep neural network, we believe the proposed method showcases
the possibilities of using deep neural networks as surrogates for optimisation
problems in the physical sciences.

Acknowledgements
The authors are grateful to Oguzhan Karakaya and Hao Ma for the valuable
discussions. This work was supported by European Research Council (ERC)
grants 637014 (realFlow), and 838342 (dataFlow).

Declaration of interests
The authors report no conflict of interest.

References
[1] D. M. Bushnell and K. J. Moore. Drag reduction in nature. Annu. Rev. Fluid Mech., 23(1):65–79, 1991. doi: 10.1146/annurev.fl.23.010191.000433.
[2] D. M. Bushnell. Aircraft drag reduction—a review. J. Aerospace Eng., 217(1):1–18, 2003. doi: 10.1243/095441003763031789.
[3] D. Thévenin and G. Janiga, editors. Optimization and Computational Fluid Dynamics, chapters 16, 17. Springer-Verlag Berlin Heidelberg, Berlin, Germany, 2008.
[4] S. N. Skinner and H. Zare-Behtash. State-of-the-art in aerodynamic shape optimisation methods. Appl. Soft Comput., 62:933–962, 2018. doi: 10.1016/j.asoc.2017.09.030.
[5] A. Jameson. Aerodynamic design via control theory. J. Sci. Comput., 3:233–260, 1988. doi: 10.1007/BF01061285.
[6] M. B. Giles and N. A. Pierce. An introduction to the adjoint approach to design. Flow, Turbul. Combust., 65:393–415, 2000. doi: 10.1023/A:1011430410075.
[7] T. D. Economon, F. Palacios, and J. J. Alonso. A viscous continuous adjoint approach for the design of rotating engineering applications. In 21st AIAA Computational Fluid Dynamics Conference. American Institute of Aeronautics and Astronautics, 2013. doi: 10.2514/6.2013-2580.
[8] H. L. Kline, T. D. Economon, and J. J. Alonso. Multi-objective optimization of a hypersonic inlet using generalized outflow boundary conditions in the continuous adjoint method. In 54th AIAA Aerospace Sciences Meeting, 2016. doi: 10.2514/6.2016-0912.
[9] B. Y. Zhou, T. Albring, N. R. Gauger, C. R. Ilario da Silva, T. D. Economon, and J. J. Alonso. An efficient unsteady aerodynamic and aeroacoustic design framework using discrete adjoint. AIAA Paper 2016-3369, July 2016. doi: 10.2514/6.2016-3369.
[10] L. Mueller and T. Verstraete. Adjoint-based multi-point and multi-objective optimization of a turbocharger radial turbine. Int. J. Turbomach. Propuls. Power, 2:14–30, 2019. doi: 10.3390/ijtpp4020010.
[11] D. W. Zingg, M. Nemec, and T. H. Pulliam. A comparative evaluation of genetic and gradient-based algorithms applied to aerodynamic optimization. Eur. J. Comput. Mech., 17(1-2):103–126, 2008. doi: 10.3166/remn.17.103-126.
[12] N. V. Queipo, R. T. Haftka, W. Shyy, T. Goel, R. Vaidyanathan, and P. Kevin Tucker. Surrogate-based analysis and optimization. Prog. Aerosp. Sci., 41(1):1–28, 2005. doi: 10.1016/j.paerosci.2005.02.001.
[13] G. Sun and S. Wang. A review of the artificial neural network surrogate modeling in aerodynamic design. J. Aerospace Eng., 233(16):5863–5872, 2019. doi: 10.1177/0954410019864485.
[14] S. Bhatnagar, Y. Afshar, S. Pan, K. Duraisamy, and S. Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. Computational Mechanics, 64(2):525–545, 2019. doi: 10.1007/s00466-019-01740-0.
[15] J. Chen, J. Viquerat, and E. Hachem. U-net architectures for fast prediction of incompressible laminar flows, 2019.
[16] F. de Avila Belbute-Peres, T. Economon, and Z. Kolter. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. In Proceedings of Machine Learning and Systems 2020, pages 11167–11176, 2020.
[17] N. Thuerey, K. Weissenow, L. Prantl, and X. Hu. Deep learning methods for Reynolds-averaged Navier-Stokes simulations of airfoil flows, 2018.
[18] J. Viquerat and E. Hachem. A supervised neural network for drag prediction of arbitrary 2D shapes in low Reynolds number flows, 2019.
[19] J. Li, M. Zhang, J. R. R. A. Martins, and C. Shu. Efficient aerodynamic shape optimization with deep-learning-based geometric filtering. AIAA J., Articles in Advance:1–17, 2020. doi: 10.2514/1.J059254.
[20] S. A. Renganathan, R. Maulik, and J. Ahuja. Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization, 2020.
[21] O. Pironneau. On optimum profiles in Stokes flow. J. Fluid Mech., 59:117–128, 1973.
[22] O. Pironneau. On optimum design in fluid mechanics. J. Fluid Mech., 64:97–110, 1974.
[23] R. Glowinski and O. Pironneau. On the numerical computation of the minimum-drag profile in laminar flow. J. Fluid Mech., 72:385–389, 1975.
[24] R. Glowinski and O. Pironneau. Towards the computation of minimum drag profiles in viscous laminar flow. Appl. Math. Modelling, 1:58–66, 1976.
[25] D. W. Kim and M.-U. Kim. Minimum drag shape in two-dimensional viscous flow. Int. J. Numer. Methods Fluids, 21(2):93–111, 2005. doi: 10.1080/10618560410001710469.
[26] E. Katamine, H. Azegami, T. Tsubata, and S. Itoh. Solution to shape optimisation problems of viscous fields. Int. J. Comut. Fluid Dyn., 19(1):45–51, 2005. doi: 10.1080/10618560410001710469.
[27] T. Kondoh, T. Matsumori, and A. Kawamoto. Drag minimization and lift maximization in laminar flows via topology optimization employing simple objective function expressions based on body force integration. Struct. Multidiscipl. Optim., 45:693–701, 2012.
[28] R. Yondo, E. Andrés, and E. Valero. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses. Prog. Aerosp. Sci., 96:23–61, 2018. doi: 10.1016/j.paerosci.2017.11.003.
[29] J. Viquerat, J. Rabault, A. Kuhnle, H. Ghraieb, A. Larcher, and E. Hachem. Direct shape optimization through deep reinforcement learning, 2019.
[30] A. Paszke, S. Gross, F. Massa, A. Lerer, et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8026–8037, 2019.
[31] D. Kraft. Self-consistent gradient flow for shape optimization. Optim. Methods Softw., 32(4):790–812, 2017. doi: 10.1080/10556788.2016.1171864.
[32] S. V. Patankar and D. B. Spalding. A calculation procedure for heat, mass and momentum transfer in three-dimensional parabolic flows. In Numerical Prediction of Flow, Heat Transfer, Turbulence and Combustion, pages 54–73. Elsevier, 1983.
[33] H. K. Versteeg and W. Malalasekera. An Introduction to Computational Fluid Dynamics, chapter 6. Pearson Education Limited, Essex, England, 2nd edition, 2007.
[34] T. D. Economon, F. Palacios, S. R. Copeland, T. W. Lukaczyk, and J. J. Alonso. SU2: an open-source suite for multiphysics simulation and design. AIAA J., 54(3):828–846, 2016. doi: 10.2514/1.J053813.
[35] S. C. Dennis and G. Chang. Numerical solutions for steady flow past a circular cylinder at Reynolds numbers up to 100. J. Fluid Mech., 42:471–489, 1970.
[36] D. J. Tritton. Experiments on the flow past a circular cylinder at low Reynolds numbers. J. Fluid Mech., 6(4):547–567, 1959. doi: 10.1017/S0022112059000829.
[37] S. Osher and J. A. Sethian. Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys., 79:12–49, 1988.
[38] J. A. Sethian. Computational Methods for Fluid Flow, chapters 16, 17. Cambridge University Press, Cambridge, UK, 2nd edition, 1999.
[39] J. A. Sethian and P. Smereka. Level set methods for fluid interfaces. Annu. Rev. Fluid Mech., 35:341–372, 2003. doi: 10.1146/annurev.fluid.35.101101.161105.
[40] A. Baeza, C. Castro, F. Palacios, and E. Zuazua. 2D Navier-Stokes shape design using a level set method. AIAA Paper 2008-172, 2008. doi: 10.2514/6.2008-172.
[41] S. Zahedi and A.-K. Tornberg. Delta function approximations in level set methods by distance function extension. J. Comput. Phys., 229:2199–2219, 2010. doi: 10.1016/j.jcp.2009.11.030.
[42] J. A. Sethian. Fast marching methods. SIAM Rev., 41:199–235, 1999. doi: 10.1137/S0036144598347059.
[43] L. He, C.-Y. Kao, and S. Osher. Incorporating topological derivatives into shape derivatives based level set methods. J. Comput. Phys., 225:891–909, 2007. doi: 10.1016/j.jcp.2007.01.003.
[44] B. A. Gardner and M. S. Selig. Airfoil design using a genetic algorithm and an inverse method. AIAA Paper 2003-0043, 2003.
[45] F. Yang, Z. Yue, L. Li, and W. Yang. A new curvature-controlled stacking-line method for optimization design of compressor cascade considering surface smoothness. J. Aerospace Eng., 232:459–471, 2018. doi: 10.1177/0954410016679433.
[46] X. Zhang, X. Qiang, J. Teng, and W. Yu. A new curvature-controlled stacking-line method for optimization design of compressor cascade considering surface smoothness. J. Aerospace Eng., 234:1061–1074, 2020. doi: 10.1177/0954410019894119.
[47] O. Ronneberger, P. Fischer, and T. Brox. U-net: convolutional networks for biomedical image segmentation. In N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing. ISBN 978-3-319-24574-4.
[48] D. P. Kingma and J. Ba. Adam: a method for stochastic optimization, 2014.
[49] J. Ling, A. Kurzawski, and J. Templeton. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech., 807, 2016.
[50] S. Sen, S. Mittal, and G. Biswas. Steady separated flow past a circular cylinder at low Reynolds numbers. J. Fluid Mech., 620:89–119, 2009. doi: 10.1017/S0022112008004904.
