An Innovative End-To-End PINN-based Solution For Rapidly Simulating Homogeneous Heat Flow Problems: An Adaptive Universal Physics-Guided Auto-Solver
Keywords: Homogeneous heat flow problems; Physical informed neural networks (PINN); Innovative end-to-end PINN solution; Adaptive universal physics-guided auto-solver

In contemporary heat flow computations, the widespread application of deep learning, specifically Physical Informed Neural Networks (PINN), has been noted. However, existing PINN methods often exhibit limited applicability to specific operational conditions and are hindered by prolonged training times, rendering them unsuitable for engineering scenarios requiring frequent changes in operational parameters. This study addresses the imperative of enhancing the compu-
1. Introduction
Deep learning encompasses multiple domains and application scenarios within engineering computations. For instance, in the
field of image recognition and processing [1,2], deep learning models find extensive applications in areas such as automation engi-
neering, quality control, autonomous driving, and medical imaging. They are employed for tasks like image recognition, object detec-
tion, and image segmentation. In the realm of natural language processing [3,4], deep learning is instrumental in engineering design
and document management, where it is utilized for text summarization, machine translation, automated categorization of engineer-
ing documents, and information extraction. In the domain of prediction and optimization [5–7], deep learning is applied to solve
complex engineering prediction problems, including weather forecasting, stock market prediction, and performance forecasting of
engineering materials. Deep reinforcement learning is also employed to optimize intricate engineering processes. Regarding simula-
tion and modeling [8,9], industries such as manufacturing and aerospace leverage deep learning to enhance simulation and modeling
processes, ensuring more accurate representation of engineering challenges in the real world. This, in turn, reduces development time
and costs. Hence, deep learning enjoys extensive research and application across various industries.
Due to its strong learning capabilities, adaptability, and portability, deep learning has the potential to enhance the simulation and
analysis methods for traditional Computational Fluid Dynamics (CFD) and heat flow problems. It can improve the accuracy and effi-
ciency of traditional methods and offer innovative solutions, such as: (1) Efficient simulation and optimization: Deep learning can be
employed to simulate heat flow problems, particularly in scenarios involving complex boundary conditions and geometric shapes
[10] where traditional numerical methods may become computationally intensive [11]. Deep learning models, such as neural net-
works, enable more efficient simulation of heat flow phenomena, accelerating problem-solving processes. Sichen Li et al. [12] pro-
pose a physically model-free control framework for energy management that consists of a multi-agent deep reinforcement learning
(MADRL) approach to mimic real power flow and heat flow calculations. (2) Data-driven modeling: Deep learning can learn patterns
and relationships from extensive experimental data [13,14], making it possible to construct accurate physical models without the
need for manually formulating complex mathematical equations. This capability is especially valuable in practical engineering appli-
cations, as many complex heat flow problems are challenging to accurately describe using traditional mathematical models. Giuseppe
Pinto et al. [15] propose a fully data-driven control scheme for the energy management which exploits Long Short-Term Memory
(LSTM) Neural Networks, and Deep Reinforcement Learning (DRL). Siyi Li et al. [16] propose an end-to-end deep learning model
which operates directly on unstructured meshes, demonstrating its ability to rapidly make accurate 3D flow field predictions for vari-
ous inlet conditions. The inherent characteristics of deep learning, including its ability to handle complex and nonlinear relationships,
make it a promising approach for improving the accuracy and efficiency of simulations in the fields of CFD and heat flow. It offers a
data-driven alternative to traditional methods, reducing the reliance on manual mathematical modeling and potentially providing
more accurate solutions for complex problems.
As the research into deep learning for computational heat transfer problems deepens, a specialized branch combining neural net-
work and physical constraint methods has been produced: Physics-Informed Neural Network (PINN) [17] or Physically Consistent
Neural Network (PCNN) [18], which is a class of methods used to solve problems based on physical principles. The combination of
neural networks and physical constraints is due to the limitations of traditional numerical methods in complex physical problems
[19–21]. Traditional numerical methods often necessitate spatial and temporal discretization [22,23] and the solving of equations in
linear or nonlinear models. However, in complex physical problems, the equations can be extremely intricate or only available in ap-
proximate forms. When dealing with multi-physics coupling issues [24], complex geometries [25], irregular boundary conditions
[26], uncertain nonlinear properties, or incomplete data [27], traditional numerical methods may be limited in their effectiveness.
PINN merges the flexibility of deep learning models with the constraints of physical equations, making it applicable to a wide range of irregular and complex physical problems [28–30]. Specifically, PINN typically comprises two components: training and infer-
ence. The training process involves embedding the residuals of physical equations into the neural network's loss function to quantify
the network's violation of the physics equations. Network parameters are optimized through the backpropagation algorithm [31]. Af-
ter repeated training, the inference process yields predictive results that adhere to the physical constraints. Yaoyao Ma et al. [32] ex-
plored the feasibility of PINN through experiments in solving the problem of electrothermal coupling. The results indicate that PINN
demonstrates good accuracy in solving electrothermal coupling problems. Xiaowei Jin et al. [33] employed PINN to simulate incom-
pressible laminar and turbulent flows by directly encoding the governing equations into deep neural networks through automatic dif-
ferentiation. Yang Liu et al. [34] introduced a novel physics-informed GAN (Generative Adversarial Network) framework that com-
bines PINN with GANs to generate high-resolution flow field images satisfying fluid dynamics equations. This is particularly valuable
for simulating complex fluid phenomena. Shengze Cai et al. [35] presented various prototype heat transfer problems using applica-
tion-specific PINN, especially for scenarios that are challenging to handle with traditional computational methods under real-world
conditions. The results indicate that PINN can not only address ill-posed problems beyond traditional computational methods but also
bridge the gap between computational and experimental heat transfer. Arunabha M. Roy and others [36] proposed an efficient, ro-
bust data-driven deep learning computational framework based on the fundamental principles of PINN for linear, continuous elastic
problems in the context of continuum mechanics. This approach significantly improves computational speed with minimal network
parameters and is highly adaptable across different computing platforms. Sun and colleagues [37] introduced a PINN method for sur-
rogate modeling of fluid flow in situations where no simulation data are available. These advancements in PINN showcase their versa-
tility in solving complex physical problems by leveraging the power of deep learning while maintaining strict adherence to the under-
lying physics.
Despite the numerous advantages demonstrated by PINN, a notable gap persists between existing PINN methodologies and CFD
approaches in terms of solving speed. This limitation hampers its scalability [38], rendering it less applicable to engineering practice.
Specifically, while the inference time for PINN is typically a matter of seconds post-training, these methods are often tailored to spe-
cific scenarios, lacking universality. Consequently, the training time required for PINN must be added to the overall solution time.
Take, for instance, a straightforward scenario involving 2D flow around a circular cylinder. The neural network's training time is fre-
quently more than ten times that of the corresponding CFD modeling and simulation time. Substantial efforts in the existing body of
work have concentrated on enhancing PINN's performance in terms of training efficiency and prediction accuracy [39–41]. For in-
stance, MA Nabian et al. [42] studied an importance sampling approach to enhance the accuracy of PINN training, by sampling the
collocation points in each iteration according to a distribution proportional to the loss function. Furthermore, the results indicate that
providing a piecewise constant approximation of the loss function for importance sampling can improve the efficiency of PINN train-
ing. PH Chiu et al. [43] proposed coupling adjacent support points and their derivative terms obtained by automatic differentiation
(AD) to improve the efficiency and accuracy of PINN training. Additionally, they utilized a dual approach coupling numerical differentiation (ND) and AD to define the loss function, showing that it is more efficient than AD-based PINN training and yields higher accuracy compared to ND-based PINN. Strategies such as hyperparameter search to optimize network depth and width have also been em-
ployed to reduce relative errors [44]. However, despite strides in addressing these efficiency concerns, the practical challenges in en-
gineering design persist, especially when confronted with the necessity to frequently adjust operating conditions. This poses a signifi-
cant hurdle in realizing the desired improvements in computational efficiency.
Fortunately, engineering often encounters homogenization problems where scenarios share the same underlying physics with
slight variations. The robust generalization ability of neural networks proves advantageous in addressing such challenges, making
PINN highly efficient in handling homogenization problems. Specifically, homogeneous heat flow problems involve situations where
the fundamental nature of heat flow remains constant, but scenario specifics may differ. For instance, consider the symmetric place-
ment of bridge piers along the centerline of a river, as illustrated in Fig. 1. Changes in horizontal positions, the number of piers, or
cross-sectional areas would necessitate re-solving using traditional CFD simulations for each variation. Presently, there is no straight-
forward method to leverage information from similar problems to make predictions for each variation. The rapid solution of homoge-
neous heat flow problems is of significant importance in engineering for two primary reasons. Firstly, it conserves computational re-
sources and time. Traditional CFD methods require solving anew for each unique scenario, consuming substantial computational re-
sources and time. Leveraging existing solutions for similar problems allows for efficient inference and prediction, reducing computa-
tional time and cost. Secondly, in engineering design and optimization, multiple iterations are often required to find the optimal solu-
tion. The swift generation of solutions for similar heat flow problems accelerates the design iteration and optimization process,
thereby enhancing overall engineering efficiency.
As previously introduced, PINN demonstrates notable strengths in addressing homogenization problems. The integration of
physics-guided methods into the training process ensures that the network learns solutions adhering to the fundamental physical laws
governing the system. This approach not only enhances the model's robustness but also facilitates knowledge sharing across different
yet fundamentally similar problems. However, challenges arise in designing a generic homogenization problem solver based on PINN.
The first challenge pertains to network architecture construction. Traditional PINN models typically focus on specific working condi-
tions, making it challenging to incorporate the working condition as a variable into PINN. The second challenge relates to data sam-
pling. Data points corresponding to each working condition differ, and because PINN is a self-supervised method, adaptively selecting data points for each condition presents a second challenge. Finally, accommodating a variety of working conditions inevitably lengthens training, so strategies to enhance training efficiency must be explored. In summary, while PINN's advantages in homogenization problem-solving are evident, the development of a generic solver introduces challenges related to network architecture, adaptive data sampling, and training efficiency that necessitate careful consideration.
To address the above challenges, we propose an innovative end-to-end PINN-based approach for solving the homogeneous heat
flow problems: an Adaptive Universal Physics-guided Auto-Solver (AUPgAS). The main contributions of this paper are as follows.
(1) Comparative analysis: We provide an introduction to both the traditional Computational Fluid Dynamics Finite Volume
Method (FVM) and the PINN method. Subsequently, we conduct a comparative analysis between these two methodologies.
PINN offers advantages that address several shortcomings of traditional CFD: it integrates data of various kinds very naturally, and its algorithmic core is relatively simple and easy to update and maintain. However, the actual training process of PINN is very long and often not as fast as traditional methods.
(2) Framework design of the innovative end-to-end approach: We present the framework of an innovative end-to-end PINN-
based approach, which is based on an Adaptive Universal Physics-guided Auto-Solver. The approach is able to quickly
simulate homogeneous heat flow problems, by introducing the operating condition into the neural network as variables, thus
improving the generality of the model in solving homogeneous problems. At the same time, the AUPgAS improves the
accuracy of training by employing an adaptive sampling method.
(3) Application in two case studies: We demonstrate the application of the proposed method in two distinct case studies,
emphasizing its practical utility and performance in real-world scenarios. The two cases are laminar incompressible flows
with viscous properties, namely flow around a single cylinder and flow around two cylinders. After training, the model
demonstrates the capability to predict pressure and velocity fields of the flow around a single cylinder in different locations
in case Ⅰ, as well as simulate the flow field when the distance between two cylinders rapidly changes in case Ⅱ. Comparative
analysis with traditional computational fluid dynamics methods reveals a significant reduction in solution time with the
AUPgAS. Post-training, AUPgAS achieves an average solution time of 3.4 s per problem, in stark contrast to the average time
of 910 s with conventional methods. This substantial improvement in efficiency is noteworthy. Furthermore, the results
obtained by AUPgAS exhibit good consistency with reference solutions, with a minimum average error of 13.55%.
The organization of this paper is as follows: the related work is given in Section 1, and the comparative analysis of traditional computational fluid dynamics and PINN methods is described in Section 2. In Section 3, the framework of the proposed innovative end-to-end PINN-based approach, i.e., the composition of the AUPgAS, is described. In Section 4, the application of the AUPgAS to two case studies is shown. Finally, Section 5 concludes this paper.
$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u} + \mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0 \tag{1}$$

Where the first equation represents the Navier-Stokes equations, while the second corresponds to the incompressible flow assumption. In these equations, ν = μ/ρ, μ represents the dynamic viscosity coefficient, ρ denotes density, t signifies time, p stands for pressure, u is the velocity vector, and ∇ represents the gradient operator, typically indicating the direction of maximum increase in a scalar field. Usually, g denotes the acceleration due to gravity, ∇·u is the divergence of u, and ∇² is the Laplacian operator.
In CFD, the finite difference method discretizes the derivative terms in partial differential equations using differencing schemes,
suitable for uniform grid structures. By solving the difference equations, numerical solutions of the flow field can be obtained. It is
commonly used for solving one-dimensional or two-dimensional simple flow problems. The finite element method transforms fluid
mechanics problems into weak forms using the weighted residual method, dividing the solution domain into finite elements in space,
establishing appropriate mathematical models within each element, and converting the problem of the entire solution domain into al-
gebraic equation sets on the elements. It is typically used for solving complex situations such as structural-fluid coupling and nonlin-
ear problems. The fundamental concept of the finite volume method (FVM) is as follows: the computational domain is divided into a
grid, with each grid point surrounded by a non-overlapping control volume. Each grid point is regarded as a representative of a con-
trol volume, and during the discretization process, physical quantities on each control volume are defined and stored at the corre-
sponding grid point. The governing partial differential equation (control equation) is integrated over each individual control volume,
resulting in a set of discrete equations, with the unknown variable φ representing the values of the dependent variable at the grid
points. To perform the integration over the control volumes, it is necessary to assume a pattern for the variation of φ between grid
points, which involves establishing segmented distribution profiles for φ. From the perspective of integration region selection, the fi-
nite volume method falls under the subdomain method of the weighted residual approach. Regarding the approximation of unknown
solutions, the finite volume method employs a local approximation within the discretization process.
Among all discretization methods, the FVM, also known as the control volume method (CVM), is a commonly employed approach
in CFD programs. This is primarily due to its advantages in terms of memory usage and computational speed, especially for large-scale
problems, high Reynolds number turbulent flows, and flows dominated by source terms.
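To make the control-volume idea above concrete, the short sketch below applies the finite volume method to a 1D steady heat-conduction problem, d/dx(k dT/dx) + S = 0, with fixed end temperatures. It is only an illustrative toy written by us in Python/NumPy under assumed values of k, S, and the boundary temperatures; it is not the FVM solver (ANSYS Fluent) used later in this paper.

```python
import numpy as np

# Toy FVM for 1D steady conduction: d/dx(k dT/dx) + S = 0 on [0, L_dom],
# Dirichlet boundaries T(0) = T_left, T(L_dom) = T_right (illustrative values).
L_dom, N = 1.0, 50             # domain length and number of control volumes
k, S = 1.0, 100.0              # conductivity and uniform volumetric source
T_left, T_right = 100.0, 20.0  # boundary temperatures
dx = L_dom / N                 # width of each control volume

A = np.zeros((N, N))           # coefficient matrix of the discrete equations
b = np.full(N, -S * dx)        # integrated source term over each control volume

for i in range(N):
    # Face conductances; boundary faces sit half a cell away from the centre.
    aW = k / dx if i > 0 else 2.0 * k / dx
    aE = k / dx if i < N - 1 else 2.0 * k / dx
    A[i, i] = -(aW + aE)
    if i > 0:
        A[i, i - 1] = aW
    else:
        b[i] -= 2.0 * k / dx * T_left    # fold known boundary value into the RHS
    if i < N - 1:
        A[i, i + 1] = aE
    else:
        b[i] -= 2.0 * k / dx * T_right

T = np.linalg.solve(A, b)      # cell-centre temperatures
print(T[:5])
```

Each row of the linear system is simply the integral flux balance over one control volume, which is exactly the discretization step described above.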
The fundamental concept of PINN involves integrating neural networks into the process of solving PDEs to enhance modeling and
solving capabilities for complex physical systems. This is achieved by directly incorporating the physical information of PDEs into the
neural network's loss function. The operational framework of PINN primarily comprises the following steps: (1) Network Structure
Design: Firstly, it is imperative to design a neural network structure, typically encompassing input layers, hidden layers, and output
layers. The input layer generally receives spatial and temporal coordinates, while the output layer is used to yield solutions to PDEs or
other relevant physical quantities. (2) Loss Function Formulation: The loss function stands as the cornerstone of PINN. It consists of
data matching terms and physical constraint terms. Data matching terms ensure the alignment of the neural network's output with
known data points, while physical constraint terms ensure that the network complies with the physical equations of the PDE. (3) Net-
work Training: The neural network is trained by minimizing the loss function. This process often employs gradient descent or its vari-
ants. The training objective is to determine network parameters that allow it to accurately replicate the system's behavior while simul-
taneously adhering to the physical equations. (4) Prediction and Solution: Once the network is trained, it can be used to predict un-
known physical quantities or solve PDEs by providing new input coordinates to the network.
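The four steps above can be condensed into a very small, self-contained example. The sketch below, which is ours and not the authors' code, trains a PINN in PyTorch for the 1D Poisson problem −u″ = π² sin(πx) on [0, 1] with u(0) = u(1) = 0 (exact solution sin(πx)); the network size, learning rate, and epoch count are arbitrary illustrative choices.

```python
import torch

torch.manual_seed(0)

# (1) Network design: 1D coordinate in, scalar solution out.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

x_f = torch.rand(200, 1, requires_grad=True)        # collocation points
x_b = torch.tensor([[0.0], [1.0]])                  # boundary points

opt = torch.optim.Adam(net.parameters(), lr=1e-3)   # (3) network training
for epoch in range(5000):
    opt.zero_grad()
    u = net(x_f)
    # (2) Loss: PDE residual -u'' - pi^2*sin(pi*x) plus boundary residual.
    du = torch.autograd.grad(u, x_f, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_f, torch.ones_like(du), create_graph=True)[0]
    res = -d2u - torch.pi**2 * torch.sin(torch.pi * x_f)
    loss = (res**2).mean() + (net(x_b)**2).mean()
    loss.backward()
    opt.step()

# (4) Prediction: query the trained network at new input coordinates.
x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
print(net(x_test).detach().squeeze())               # should approximate sin(pi*x)
```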
The most prevalent PINN architecture, as depicted in Fig. 2, employs a fully connected neural network to approximate the solution
u(x,t). This approximation is then utilized to construct the residuals LF for the governing equations, as well as the residuals LB for
boundary and initial conditions. Training of the fully connected network parameters is accomplished through gradient descent utiliz-
ing backpropagation based on the loss function.
Table 1
Comparison between CFD and PINN.

                CFD                                                PINN
Advantage       Convenient for optimization design and analysis   Ability to integrate data and knowledge
                                                                   Simple algorithm
                                                                   Easy to maintain and update
Disadvantage    Defects of integration of various fidelity data   Long training duration
                Time-consuming meshing
                Troubles in updating and maintaining software
PINN is universally applicable as long as the Navier-Stokes equations can describe the problem. Furthermore, it is relatively straightforward to manage and maintain.
While PINN exhibits robust data integration and knowledge-assimilation capabilities, along with excellent adaptability and fast inference speed upon training completion, it is subject to the 'no free lunch' theorem. PINN necessitates the solution of a highly
nonlinear, non-convex optimization problem, coupled with high-order automatic differentiation of input variables, rendering the ac-
tual training process notably time-consuming. Often, its speed lags behind that of traditional methods. Consequently, we propose an
Adaptive Universal Physics-guided Auto-Solver (AUPgAS), which, by introducing variable operating condition information within the
input, enables rapid solution of homogeneous heat flow problems post-training completion, thereby enhancing engineering effi-
ciency.
$$\nabla\cdot\mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} = \frac{1}{\rho}\nabla\cdot\boldsymbol{\sigma} + \mathbf{g}, \qquad \boldsymbol{\sigma} = -p\mathbf{I} + \mu\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{T}\right) \tag{2}$$

where σ represents the Cauchy stress tensor, p = −tr(σ)/2, μ denotes the dynamic viscosity coefficient, ρ signifies density, t stands for time, p denotes pressure, and u represents the velocity vector. ∇ indicates the gradient operator, which points in the direction of maximum increase of a scalar field, g signifies gravitational acceleration, ∇·u corresponds to the divergence of u, and ∇² represents the Laplacian operator.
Fig. 3. The structure and use of the Adaptive Universal Physics-guided Auto-Solver.
$$y_{j} = \sigma\left(w_{i,j}\,x_{i} + b_{j}\right) \tag{3}$$
Where wi,j and bj represent the trainable weights and biases, respectively, and σ( ⋅ ) denotes the activation function that signifies the
nonlinear transformation. All layers, except the final one, consist of a 'linear transformation + activation function' structure. In this
study, the Tanh activation function is employed. The performance of DNN relies on the parameters within the model. In order to bet-
ter fit the training data and make accurate predictions, it is imperative to continuously adjust these parameters. The training process
of DNN involves iteratively fine-tuning model parameters through backpropagation. This optimization technique enables the network
to minimize discrepancies between predicted outputs and actual targets, thereby enhancing its capacity for generalization to unseen
data.
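As an illustration of Eq. (3), a fully connected network of this kind can be written in a few lines of PyTorch. The depth, width, Tanh activation, Xavier initialization, and the 3-input/3-output feature dimensions mirror the settings listed later in Table 2, but the class name and code organization are our own sketch rather than the authors' implementation.

```python
import torch

class FCN(torch.nn.Module):
    """Fully connected network: each hidden layer is 'linear + Tanh' (Eq. (3))."""

    def __init__(self, in_dim=3, out_dim=3, width=40, hidden_layers=10):
        super().__init__()
        dims = [in_dim] + [width] * hidden_layers + [out_dim]
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )
        for layer in self.layers:
            # Xavier weight initialization (normal variant assumed here).
            torch.nn.init.xavier_normal_(layer.weight)
            torch.nn.init.zeros_(layer.bias)

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.tanh(layer(x))        # y_j = tanh(w_ij * x_i + b_j)
        return self.layers[-1](x)           # final layer: linear transformation only

# Inputs are (x, y, operating-condition variable); outputs are (u, v, p).
model = FCN()
print(model(torch.rand(8, 3)).shape)        # torch.Size([8, 3])
```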
$$\mathcal{L} = \omega_{\mathcal{F}}\,\mathcal{L}_{\mathcal{F}} + \omega_{\mathcal{B}}\,\mathcal{L}_{\mathcal{B}} \tag{4}$$

Where ωF and ωB represent customizable weighting coefficients for the governing-equation loss and for the initial and boundary condition losses, respectively, and LF and LB penalize the residuals of the governing equations and of the boundary and initial conditions. Specifically:

$$\mathcal{L}_{\mathcal{F}} = \frac{1}{N_{\mathcal{F}}}\sum_{i=1}^{N_{\mathcal{F}}}\left|r_{\mathcal{F}}\left(x^{i},t^{i}\right)\right|^{2}, \qquad \mathcal{L}_{\mathcal{B}} = \frac{1}{N_{I}}\sum_{i=1}^{N_{I}}\left\|r_{I}\left(x^{i},t^{i}\right)\right\|^{2} + \frac{1}{N_{\mathcal{B}}}\sum_{i=1}^{N_{\mathcal{B}}}\left\|r_{\mathcal{B}}\left(x^{i},t^{i}\right)\right\|^{2} \tag{5}$$
where r(∙) represents the residual, N(∙) denotes the number of collocation points (subscripts F for the governing equations, I for the
initial conditions, and B for the boundary conditions). All these loss terms are functions of the network weights and bias parameters,
wi,j and bj.
In order to compute the residual for LF , it is essential to calculate the derivatives of the output with respect to the input. This
computation can be achieved through automatic differentiation, which can be implemented using deep learning frameworks such as
TensorFlow [52] or PyTorch [53]. Automatic differentiation relies on the chain rule to combine derivatives of individual components
to obtain the overall derivative. This enables us to avoid cumbersome manual derivations or numerical discretization when comput-
ing higher-order derivatives.
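For the steady, incompressible cases considered later (no time derivative, gravity neglected), the residuals entering LF can be assembled with automatic differentiation roughly as follows. The helper assumes the network sketched above, taking (x, y, s) as input, where s is the operating-condition variable, and returning (u, v, p); the function name, the treatment of ρ and μ as constants, and other details are our assumptions rather than the authors' code.

```python
import torch

def ns_residuals(model, xys, rho=1.0, mu=2e-2):
    """Residuals of the steady 2D incompressible Navier-Stokes equations,
    computed with torch.autograd (a sketch, not the authors' exact code)."""
    xys = xys.detach().requires_grad_(True)
    out = model(xys)                       # output columns: u, v, p
    u, v, p = out[:, 0:1], out[:, 1:2], out[:, 2:3]

    def grad(f):
        # d f / d (x, y, s); only the x- and y-columns are used below.
        return torch.autograd.grad(f, xys, torch.ones_like(f), create_graph=True)[0]

    du, dv, dp = grad(u), grad(v), grad(p)
    u_x, u_y = du[:, 0:1], du[:, 1:2]
    v_x, v_y = dv[:, 0:1], dv[:, 1:2]
    u_xx, u_yy = grad(u_x)[:, 0:1], grad(u_y)[:, 1:2]
    v_xx, v_yy = grad(v_x)[:, 0:1], grad(v_y)[:, 1:2]

    continuity = u_x + v_y
    momentum_x = u * u_x + v * u_y + dp[:, 0:1] / rho - (mu / rho) * (u_xx + u_yy)
    momentum_y = u * v_x + v * v_y + dp[:, 1:2] / rho - (mu / rho) * (v_xx + v_yy)
    return continuity, momentum_x, momentum_y
```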
Furthermore, if there are some known ground truth data within the computational domain, one may consider incorporating a data
loss term, which signifies the residual between predictions and the data:
$$\mathcal{L}_{\mathrm{data}} = \frac{1}{N_{\mathcal{D}}}\sum_{i=1}^{N_{\mathcal{D}}}\left|u\left(x^{i},t^{i}\right) - u^{i}_{\mathrm{data}}\right|^{2} \tag{6}$$
During the training process, optimization algorithms such as Adam [54] and Limited-memory BFGS (L-BFGS) [55] can be em-
ployed to train the DNN due to their favorable convergence rates. In this paper, we employ the Adam optimizer for parameter opti-
mization, and the chosen form of the loss function is the mean squared error function.
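Putting the pieces together, one training iteration amounts to evaluating the weighted mean-squared residuals and taking an Adam step. The sketch below reuses the FCN and ns_residuals helpers sketched above, adopts the Adam settings of Table 2, and uses the weighting L = LF + 2 LB employed in the case studies; the placeholder tensors stand in for the sampled collocation and boundary points.

```python
import torch

model = FCN()                                           # network sketched above
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999))

# Illustrative placeholder tensors; in practice these come from the sampler.
x_f = torch.rand(1024, 3)                               # collocation points (x, y, s)
x_b = torch.rand(256, 3)                                # Dirichlet boundary points
u_b = torch.zeros(256, 2)                               # prescribed (u, v) on those points

for epoch in range(300_000):                            # 300,000 epochs as in Table 2
    optimizer.zero_grad()
    cont, mom_x, mom_y = ns_residuals(model, x_f)
    loss_F = (cont**2 + mom_x**2 + mom_y**2).mean()     # governing-equation loss
    loss_B = ((model(x_b)[:, 0:2] - u_b) ** 2).mean()   # boundary-condition loss
    loss = loss_F + 2.0 * loss_B                        # L = L_F + 2 L_B
    loss.backward()
    optimizer.step()
```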
4. Case studies
This section applies the AUPgAS proposed in this paper to the estimation of two-dimensional flow around a cylinder, with a focus
on simulating steady-state viscous, incompressible laminar flow as an illustrative example. It provides a detailed description of the
governing equations, data preparation, loss functions, and the training configuration for two specific cases. This serves to illustrate
how to set up the PINN for solving homogeneous heat flow problems. Simulation results demonstrate that the proposed AUPgAS ex-
hibits significant potential in the simulation of flow for solving homogeneous heat flow problems.
4.2. Case study I: 2D steady circular cylinder flow with inconsistent cross section
4.2.1. Scenario and physical model
Firstly, the proposed AUPgAS is applied to the case in which the position of the cylinder cross-section varies. The operating condition variable is set as the horizontal coordinate of the cylinder's center, Sx. This configuration allows steady-state viscous, incompressible laminar flow around the cylinder to be simulated and solved with the AUPgAS as the cylinder moves to any arbitrary position along the x-axis.
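Because the operating-condition variable Sx is an ordinary input feature, solving a new working condition after training reduces to a single forward pass over a grid of query points. The sketch below, with our own function and variable names, illustrates this for a hypothetical trained model; the masking of points inside the cylinder is likewise our own illustrative choice.

```python
import torch

def predict_field(model, s_x, nx=221, ny=81, d=0.1, y_c=0.2):
    """Evaluate (u, v, p) on a grid for a cylinder centred at (s_x, y_c)."""
    x = torch.linspace(0.0, 1.1, nx)
    y = torch.linspace(0.0, 0.4, ny)
    X, Y = torch.meshgrid(x, y, indexing="ij")
    pts = torch.stack([X.flatten(), Y.flatten()], dim=1)
    outside = ((pts[:, 0] - s_x) ** 2 + (pts[:, 1] - y_c) ** 2) >= (d / 2) ** 2
    pts = pts[outside]                                   # drop points inside the body
    inp = torch.cat([pts, torch.full((pts.shape[0], 1), s_x)], dim=1)  # (x, y, Sx)
    with torch.no_grad():
        return pts, model(inp)                           # columns: u, v, p

pts, uvp = predict_field(model, s_x=0.45)                # a previously unseen position
```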
A schematic diagram of the specific application scenario is depicted in Fig. 4. The specific parameters are as follows: the size of the flow domain is [0, 1.1] m × [0, 0.4] m, the diameter of the cylinder cross-section is D = 0.1 m, the center of the cylinder remains positioned on the horizontal symmetry axis y = 0.2 m, and the range of variation of the cylinder along the x-axis is set to [0.1, 1.0] m. In the steady-state condition, the dynamic viscosity and density of the fluid are set to 2×10⁻² kg/(m·s) and 1 kg/m³, respectively.
A parabolic velocity distribution is imposed at the inlet, with a maximum velocity Umax of 1.0 m/s, resulting in a low Reynolds number and hence laminar flow. A zero-pressure condition is imposed at the outlet, and strict no-slip conditions are enforced at the walls and on the surface of the cylinder. Gravity is neglected.
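A common choice consistent with the stated maximum velocity Umax and the channel height H = 0.4 m is the parabolic profile u(0, y) = 4 Umax y (H − y)/H²; the sketch below assumes this form (the paper's exact inlet expression may differ) and generates the corresponding inlet boundary points together with the operating-condition input.

```python
import torch

H, U_MAX = 0.4, 1.0         # channel height [m] and peak inlet velocity [m/s]

def inlet_points(n, s_x):
    """Sample inlet boundary points and their prescribed velocity.

    Assumes u(0, y) = 4*U_MAX*y*(H - y)/H**2 (our assumption; the paper states
    only that the inlet profile is parabolic with U_MAX = 1 m/s)."""
    y = torch.rand(n, 1) * H
    x = torch.zeros(n, 1)
    s = torch.full((n, 1), s_x)                      # operating-condition input Sx
    u = 4.0 * U_MAX * y * (H - y) / H**2             # streamwise velocity target
    v = torch.zeros(n, 1)                            # no transverse velocity
    return torch.cat([x, y, s], dim=1), torch.cat([u, v], dim=1)

xb, ub = inlet_points(200, s_x=0.3)
```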
4.3. Case study II: 2D steady circular cylinder flow with variable spacing double cylinders
4.3.1. Scenario and physical model
Considering the steady-state flow around two cylinders, with the same computational domain as shown in Fig. 4, simulation com-
putations are conducted using the proposed AUPgAS. The condition variable is set as the spacing SG between the two cylinders. As the
distance SG between the two cylinders continuously varies, the steady-state flow around the double cylinders can be rapidly solved us-
ing the AUPgAS proposed in this paper.
The schematic diagram of the specific application scenario is illustrated in Fig. 10. The specific parameters are as follows: the diameter of the two cylindrical cross-sections is D = 0.1 m, the centers of the cylinders remain at x = 0.2 m, and the variation range of the spacing between the double cylinders is [0.1, 0.2] m.
In a steady-state scenario, the fluid's dynamic viscosity, density, and inlet velocity distribution are consistent with case I, indicat-
ing laminar flow governed by viscous, incompressible characteristics. The outlet enforces a zero-pressure condition, while a strict no-
slip condition is maintained along the walls and the boundaries of the cylinders. Gravitational effects are neglected.
If real-world data is available, data loss terms can be incorporated into the loss function. In this study, the loss function contains no data loss term and is expressed as L = LF + 2LB, with the mean squared error chosen as the form of the loss function.
The data acquisition process is shown in Fig. 11 and the process of preparing training data is outlined as follows: Initially, 40,000
data points are uniformly and randomly sampled across the entire flow domain as shown in Fig. 11(a), with an additional 10,000
points densely sampled within the region [0.1,0.3]m×[0,0.4]m, as shown in Fig. 11(b). To capture flow details effectively during
training, additional data points are randomly generated within the vicinity of the 100 twin-cylinder variation regions, referred to as
[0.15,0.25]m×[0.05,0.35]m, as shown in Fig. 11(c). Consequently, the compiled raw dataset consists of a total of NF = 71284 data
points, inclusive of NdB = 21083 Dirichlet boundary points (associated with the cylinders, walls, and inlet) and NnB = 201 Neumann boundary points (associated with the outlet). Similarly, throughout the training process, it is essential to adjust the dataset dynamically in response to variations in the separation distance between the twin cylinders, denoted as SG. Therefore, the dataset dynamically adapts, reflecting the adaptive nature of the AUPgAS proposed in this paper.

Fig. 7. Velocity and pressure fields of steady flow through cylinder at different positions.
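As a concrete illustration of the adaptive resampling described above, the sketch below regenerates the collocation set whenever the spacing SG changes. The 40,000- and 10,000-point counts and the sampling windows follow the description above; the near-cylinder point count, the assumption that the two cylinder centres sit at (0.2, 0.2 ± SG/2), and all helper names are our own.

```python
import torch

def sample_case2(s_g, d=0.1, x_c=0.2, y_mid=0.2):
    """Resample collocation points for twin cylinders separated by s_g (a sketch;
    cylinder centres assumed at (x_c, y_mid +/- s_g/2))."""
    centres = [(x_c, y_mid - s_g / 2), (x_c, y_mid + s_g / 2)]

    def keep_outside(pts):
        mask = torch.ones(pts.shape[0], dtype=torch.bool)
        for cx, cy in centres:
            mask &= ((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2) >= (d / 2) ** 2
        return pts[mask]

    def uniform(n, x_lo, x_hi, y_lo, y_hi):
        pts = torch.rand(n, 2)
        pts[:, 0] = x_lo + (x_hi - x_lo) * pts[:, 0]
        pts[:, 1] = y_lo + (y_hi - y_lo) * pts[:, 1]
        return pts

    coarse = uniform(40_000, 0.0, 1.1, 0.0, 0.4)       # whole flow domain
    dense = uniform(10_000, 0.1, 0.3, 0.0, 0.4)        # refined region near the bodies
    near = uniform(20_000, 0.15, 0.25, 0.05, 0.35)     # vicinity of the twin cylinders
    pts = keep_outside(torch.cat([coarse, dense, near]))
    s = torch.full((pts.shape[0], 1), s_g)             # append the condition variable
    return torch.cat([pts, s], dim=1)

collocation = sample_case2(s_g=0.15)                   # regenerated as SG varies
```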
Fig. 8. Comparison of results between ANSYS Fluent and AUPgAS after 150,000 training epochs in case study I.
Fig. 9. Comparison of results between ANSYS Fluent and AUPgAS after 300,000 training epochs in case study I.
Values of SG were chosen as 0.12, 0.15, 0.17, and 0.2. We employed the AUPgAS for the computational tasks, with an average inference prediction time of 3.6 s. Subsequently, the obtained pressure and velocity fields were visualized, and the results are depicted in Fig. 13.
4.3.5. Validation
The efficacy and trainability of the proposed method were substantiated through simulation experiments. The reference solutions
were obtained from the ANSYS Fluent 20.2.0 software package, utilizing the finite volume method. A comparative analysis of pres-
sure and velocity distributions was conducted when incorporating operating condition variable information, employing both AUPgAS
and ANSYS Fluent, as depicted in Figs. 14 and 15. Fig. 14 (a)–(d) present the results and errors between the two methods for SG values
of 0.12, 0.15, 0.17, and 0.2 after 150,000 training iterations, while Fig. 15 (a)–(d) showcase the outcomes and errors for the same SG
values after 300,000 training iterations. It is evident that with an increase in the number of training iterations, the discrepancy be-
tween the results obtained using AUPgAS and the reference solution consistently diminishes. However, in comparison to the results of
Case I, the errors in Case II are somewhat larger, possibly owing to suboptimal network selection and hyperparameter configuration. Nevertheless, the velocity field remains in relatively good agreement with the reference solution.
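A straightforward way to quantify the discrepancy between the AUPgAS prediction and the Fluent reference is a mean relative error evaluated at matching points, as sketched below; this is one plausible metric and not necessarily the exact error measure used by the authors.

```python
import torch

def mean_relative_error(pred, ref, eps=1e-8):
    """Mean relative error between predicted and reference fields at the same points
    (one plausible metric; not necessarily the one used in the paper)."""
    return (torch.abs(pred - ref) / (torch.abs(ref) + eps)).mean().item()

# Example: compare predicted and reference velocity fields on matching sample points.
# u_pred, u_ref = ...  (tensors of equal shape, e.g. interpolated onto the same grid)
# print(f"mean relative error: {100 * mean_relative_error(u_pred, u_ref):.2f}%")
```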
In the first case, after training, we can directly predict the pressure and velocity fields for the cylinder positioned at different lateral coordinates. The second case fo-
cuses on flow past two cylinders, with the working condition variable being the gap between the two cylinders. In these two cases, we
demonstrate the effectiveness of the innovative end-to-end approach proposed in this paper. Compared to using traditional computa-
tional fluid dynamics methods to solve the homogeneous heat flow problem, the solution results obtained after training our proposed
network structure exhibit good consistency with the reference solution while reducing the computation time to approximately 1/300
of the traditional methods. Therefore, this approach holds tremendous potential in practical applications.
In future research, we are setting our sights on exploring and developing more sophisticated network architectures with the intent
to significantly boost the efficiency and generalization capabilities of our model. We are particularly interested in graph neural net-
works, as they offer a unique approach to processing data that has a natural graph structure, which is often the case in physical sys-
tems. Another area of our research will be the investigation of cross-domain transfer learning strategies. The goal is to enable our model to leverage knowledge from one physical domain and apply it to another, thereby enhancing the model's versatility and reducing the dependency on large amounts of domain-specific data. By pursuing these research directions, we anticipate contributing to a deeper understanding of physical systems and more robust predictive models for a wide array of applications.

Fig. 13. Velocity and pressure fields of steady flow when the distance between two cylinders changes.
Fig. 14. Comparison of results between ANSYS Fluent and AUPgAS after 150,000 training epochs in case study Ⅱ.
Fig. 15. Comparison of results between ANSYS Fluent and AUPgAS after 300,000 training epochs in case study Ⅱ.
Table 2
Parameter settings involved in AUPgAS.
Parameters Settings
Optimizer Adam
Decay coefficient of first-order moments β1 0.9
Decay coefficient of second-order moments β2 0.999
Learning Rate 0.0005
Learning rate decay factor 0.1
Epochs 300000
No. of Hidden Layers 10
Hidden Layer Size 40/80
Activation Function Tanh
Weight Initialization Xavier
Input Feature Dimensions 3
Output Feature Dimensions 3
Table 3
Comparison between CFD and PINN.
Data availability
No data was used for the research described in the article.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 62203350), in part by Industrial Field Project -
Key Industrial Innovation Chain (Group) of Shaanxi Province (No.:2022ZDLGY06-02).
Nomenclature
u velocity vector, m/s
t time, s
p pressure, Pa
μ the dynamic viscosity coefficient, kg/(m·s)
ρ density, kg/m3
∇ gradient operator
g acceleration of gravity, m/s2
∇2 the Laplacian operator
φ the values of the dependent variable at the grid points
u(x,t) the solution of partial differential equations
LF the residuals of the governing equations
LB the residuals of boundary and initial conditions
Ldata the data loss term
L the loss function
σ the Cauchy stress tensor
u the solution of the Navier-Stokes equations
ψ the stream function
x the horizontal coordinate (in geometry), m
y the vertical coordinate (in geometry), m
wi,j the trainable weights of the network
bj the biases of the network
σ( ⋅ ) the activation function of the network
ωF the coefficient for governing equation losses
ωB the coefficient for initial and boundary condition losses
r(⋅) the residual
N(⋅) the number of collocation points
F the governing equations
I the initial conditions
B the boundary conditions
Sx the horizontal coordinate of the cylinder's center, m
D the diameter of the cylinder, m
u(0,y) the inlet velocity normal distribution, m/s
H the flow field width, m
Umax the maximum inlet flow velocity, m/s
SG the separation distance between the two cylinders, m
References
[1] J. Chen, H. Shu, X. Tang, et al., Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-
varying environment, Energy 239 (2022) 122123.
[2] Z. Zhong, J. Li, Z. Luo, et al., Spectral–spatial residual network for hyperspectral image classification: a 3-D deep learning framework, IEEE Trans. Geosci. Rem.
Sens. 56 (2) (2017) 847–858.
[3] A. Torfi, R.A. Shirvani, Y. Keneshloo, et al., Natural language processing advancements by deep learning: a survey, arXiv preprint arXiv:2003.01200, 2020.
[4] D. Wang, J. Su, H. Yu, Feature extraction and analysis of natural language processing for deep learning English language, IEEE Access 8 (2020) 46335–46345.
[5] M. Neshat, M.M. Nezhad, S. Mirjalili, et al., Short-term solar radiation forecasting using hybrid deep residual learning and gated LSTM recurrent network with
differential covariance matrix adaptation evolution strategy, Energy 278 (2023) 127701.
[6] W. Jiang, P. Lin, Y. Liang, et al., A novel hybrid deep learning model for multi-step wind speed forecasting considering pairwise dependencies among multiple
atmospheric variables, Energy (2023) 129408.
[7] D. Kang, D. Kang, S. Hwangbo, et al., Optimal planning of hybrid energy storage systems using curtailed renewable energy through deep reinforcement learning,
Energy 284 (2023) 128623.
[8] K. Yeo, I. Melnyk, Deep learning algorithm for data-driven simulation of noisy dynamical system, J. Comput. Phys. 376 (2019) 1212–1231.
[9] Y. Lu, B. Wang, Y. Zhao, et al., Physics-informed surrogate modeling for hydro-fracture geometry prediction based on deep learning, Energy 253 (2022) 124139.
[10] J.Z. Peng, X. Liu, Z.D. Xia, et al., Data-driven modeling of geometry-adaptive steady heat convection based on convolutional neural networks, Fluids 6 (12)
(2021) 436.
[11] E.M. Sparrow, A. Haji-Sheikh, Flow and Heat Transfer in Ducts of Arbitrary Shape with Arbitrary Thermal Boundary Conditions, 1966.
[12] S. Li, W. Hu, D. Cao, et al., Physics-model-free heat-electricity energy management of multiple microgrids based on surrogate model-enabled multi-agent deep
reinforcement learning, Appl. Energy 346 (2023) 121359.
[13] C. Janiesch, P. Zschech, K. Heinrich, Machine learning and deep learning, Electron. Mark. 31 (3) (2021) 685–695.
[14] J. Leng, P. Jiang, A deep learning approach for relationship extraction from interaction context in social manufacturing paradigm, Knowl. Base Syst. 100 (2016)
188–199.
[15] G. Pinto, D. Deltetto, A. Capozzoli, Data-driven district energy management with surrogate models and deep reinforcement learning, Appl. Energy 304 (2021)
117642.
[16] S. Li, M. Zhang, M.D. Piggott, End-to-end wind turbine wake modelling with deep graph representation learning, Appl. Energy 339 (2023) 120928.
[17] S. Cai, Z. Mao, Z. Wang, et al., Physics-informed neural networks (PINNs) for fluid mechanics: a review, Acta Mech. Sin. 37 (12) (2021) 1727–1738.
[18] L. Di Natale, B. Svetozarevic, P. Heer, et al., Physically consistent neural networks for building thermal modeling: theory and analysis, Appl. Energy 325 (2022)
119806.
[19] L. Ge, F. Sotiropoulos, A numerical method for solving the 3D unsteady incompressible Navier–Stokes equations in curvilinear domains with complex immersed
boundaries, J. Comput. Phys. 225 (2) (2007) 1782–1809.
[20] S.C. Chapra, Numerical Methods for Engineers, Mcgraw-hill, 2010.
[21] H. Gao, L. Sun, J.X. Wang, PhyGeoNet: physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state PDEs on
irregular domain, J. Comput. Phys. 428 (2021) 110079.
[22] T.E. Tezduyar, D.K. Ganjoo, Petrov-Galerkin formulations with weighting functions dependent upon spatial and temporal discretization: applications to
transient convection-diffusion problems, Comput. Methods Appl. Mech. Eng. 59 (1) (1986) 49–71.
[23] Z. Ming, C.A.O. Yihua, Numerical simulation of rotor flow field based on overset grids and several spatial and temporal discretization schemes, Chin. J.