Mesh Generation in
CFD
Edited and Adapted by:
Ideen Sadrehaghighi, Ph.D.
Cylinder Head -
(Polyhedral cells) Mixer (SAMM cells)
ANNAPOLIS, MD
Figure I Painting by Lu Xinjian - Constellation / Virgo, Dia. 150cm, Acrylic on Canvas, 2015
Figure II Artistic Rendering by Lu Xinjian - City DNA /London 200 x 400, Acrylic on Canvas, 2013
Figure III Using Simulation to Understand Avian Nest Design Strategies + FE Mesh Generation
Module of ScanIP to be Used for Conversion of the Segmented 3D Image Data into High Quality
Volumetric Meshes, Courtesy of Simpleware™ Product Group at Synopsys
Contents
1 Introduction ................................................................................................................................ 29
1.1 The Black Box Dilemma .................................................................................................................................... 29
1.1.1 Trust the Mesh Generated by the Software, or Take a Proactive Approach? .................. 29
1.1.2 Not All Meshes Are Created Equal ...................................................................................... 29
1.1.3 The Mesh Types ................................................................................................................. 30
1.1.4 Regional Meshing .............................................................................................................. 31
1.1.5 Simulation Cost .................................................................................................................. 31
1.1.6 Physics vs. Mesh ................................................................................................................ 32
1.1.7 Meshing Generalities ......................................................................................................... 32
1.2 Classification of Mesh Generation Techniques ....................................................................................... 33
1.2.1 Field (Domain) Discretization Process (Mesh Generation)................................................ 35
9.2.4.3 Prism Grid Generation Method Based on Anisotropic Agglomeration Approach ..... 209
9.2.4.3.1 Volume Agglomeration .................................................................................... 209
9.2.4.3.2 Interface Agglomeration .................................................................................. 210
9.2.4.4 Multigrid/Parallel Algorithm....................................................................................... 210
9.2.4.5 Multi-Level Coarser Grid Generation Based on Anisotropic Agglomeration Approach ..... 210
9.2.4.6 Applications and Discussions ...................................................................................... 211
9.2.4.6.1 Subsonic Turbulence Flow Over 2D 30P30N Airfoil ......................................... 212
9.2.4.6.2 Transonic Turbulence Flow over ONERA M6 Wing .......................................... 212
9.2.4.6.3 Transonic Turbulence Flow over DLR-F6 Wing-Body Configuration ................ 213
9.2.4.7 Concluding Remarks ................................................................................................... 215
List of Tables
Table 3.1 NASA CRM Free-Stream Conditions ................................................................................................ 57
Table 7.1 Nomenclature ........................................................................................................................................ 136
Table 7.2 Comparison of the number of vertices and quality of the mesh for different values of δ
- (Courtesy of [Labbe]) ............................................................................................................................................... 144
List of Figures
Figure 1.1 Meshes Created using ANSYS Mosaic-Enabled Poly-Hex Core Meshing - Courtesy of
Sheffield Hallam University ......................................................................................................................................... 30
Figure 1.2 Methodology of General Grid Generation .................................................................................... 32
Figure 1.3 Examples of Structured Meshes for Turbine Blade ................................................................. 34
Figure 1.4 Classification of Grid Generation Algorithms (Courtesy of Steven Owen) .................... 34
Figure 1.5 Example of Unstructured Tetrahedral Meshes ......................................................................... 35
Figure 2.1 Anatomy of commercial CAD Systems .......................................................................................... 37
Figure 2.2 Fighter Airplane F-16 calculation ................................................................................................... 37
Figure 2.3 Example of a CSG Tree ......................................................................................................................... 40
Figure 2.4 Geometry Import and Preparation................................................................................................. 41
Figure 2.5 Different Analysis Require Different Geometric Representations .................................... 42
Figure 2.6 Small Feature (Left) vs Removed (Right) .................................................................................... 43
Figure 3.1 Domain Topology (O-Type, C-Type, and H-Type; from left to right) ............................... 45
Figure 3.2 C-H Type Topology for a Wing Section ......................................................................................... 46
Figure 3.3 Sponge Analogy ...................................................................................................................................... 46
Figure 3.4 Multi Block Representation for C-H Mesh Around a Wing ................................................... 47
Figure 3.5 Dual Block Grid Topology for a Generic Wing-Fuselage Configuration .......................... 48
Figure 3.6 Topology and Grid on a Multi-Block Wings via GridPro® ..................................................... 48
Figure 3.7 Multi-Block Gridding of Turbine Blade - (Courtesy of GridPro) ....................................... 49
Figure 5.12 Schematic image of Adaptive Mesh Refinement – (Courtesy of Hiroshi Abe) ........... 98
Figure 5.13 Pressure Contours in 2D Backward Step .................................................................................. 99
Figure 5.14 Example Chimera Grid Near Curved Surface (Courtesy of NASA Ames) ................. 101
Figure 5.15 Example Hybrid Grid Near Curved Surface – (Courtesy of NASA Ames).................. 103
Figure 5.16 Basic Superposition Example – (Courtesy of Kalinin, Mazo and Isaev) .................... 104
Figure 5.17 Example of Cartesian Grid on a Generic Airplane – (Source: Richard Smith 1996)
.............................................................................................................................................................................................. 105
Figure 5.18 Converging of an Octree Decomposition Around an Airfoil ........................................... 106
Figure 5.19 A close-up view of Nasty Cheese, a well-known test-case featuring 30° dihedral
angles – (Courtesy of [Maréchal]) ........................................................................................................................... 107
Figure 5.20 Overset Mesh Combination.......................................................................................................... 107
Figure 5.21 Two Counter-Rotating Objects Embedded in Two Overset Regions with
Background Mesh – (Courtesy of Siemens) ....................................................................................................... 108
Figure 6.1 Closing stage of a Moving Front Method ................................................................................... 109
Figure 6.2 Mesh Parameters ................................................................................................................................ 110
Figure 6.3 Surface Mesh of SGI Logo ................................................................................................................ 111
Figure 6.4 States of a Front Edge – (Courtesy of Owen et al.)................................................................ 112
Figure 6.5 Steps demonstrating process of generating a quadrilateral from Front NA-NB -
(Courtesy of Owen et al.) ........................................................................................................................................... 113
Figure 6.6 Progression of Q-Morph- (Courtesy of Owen et al.) ............................................................. 114
Figure 6.7 Results of Q-Morph Compared with Lee’s (1994) Advancing Front Indirect............ 115
Figure 6.8 Large Transition Mesh for CFD Application - (Courtesy of Owen et al.) ..................... 116
Figure 6.9 Comparison of Q-Morph with Lee’s Algorithm Illustrating Element Boundary....... 116
Figure 6.10 Success and failure of the in-sphere test of abcd with e. ................................................. 118
Figure 6.11 Robust and Fast way to Detect if point D lies in the Circumcircle of A, B, C ............ 119
Figure 6.12 Two-Three Tetrahedral swap ..................................................................................................... 119
Figure 6.13 Delaunay Triangulation (white) and Voronoi Diagram (blue) – (Courtesy of
[Labbe]) ............................................................................................................................................................................ 120
Figure 6.14 2D Delaunay Triangulation of a Set of Vertices (Black) Restricted to a Curve (Blue)
.............................................................................................................................................................................................. 122
Figure 6.15 Meshing Types in SAMM ............................................................................................................... 123
Figure 6.16 Mesh Inside a Pyramid .................................................................................................................. 123
Figure 6.17 Typical Polyhedral Cell and their Decomposition .............................................................. 124
Figure 6.18 Dual Surface Triangulation Resulting in Polyhedron ....................................................... 125
Figure 6.19 (a) Cut of initial tetrahedral mesh of a simple 2-material model (b) Cut of initial
polyhedral mesh showing valid (green) and invalid (red) elements (c) Cut of untangled and
optimized polyhedral mesh (d) Full polyhedral mesh .................................................................................. 126
Figure 6.20 Boundary Layer Prisms Generated on a Cascade of a 2D Triangulation and Dual
Polyhedron ...................................................................................................................................................................... 128
Figure 6.21 3D Concept of Cascading for Boundary Layer ..................................................................... 129
Figure 6.22 Comparison of traditional voxel mesh with Simpleware mesh preserving
segmented domains without decreasing data resolution............................................................................ 131
Figure 6.23 Mechanical Characterization of Nest Material ..................................................................... 132
Figure 7.1 Representation of a 3D Metric with Eigenvalues λ1, λ2 and λ3 as an Ellipsoid –
(Courtesy of [Labbe]) .................................................................................................................................................. 138
Figure 7.2 An anisotropic Uniform Delaunay Triangulation (orange) and the Corresponding
Stretched .......................................................................................................................................................................... 140
Figure 7.3 Two stars Sp and Sq forming an inconsistent configuration - (Courtesy of [Labbe])
.............................................................................................................................................................................................. 141
Figure 7.4 Influence of the Parameter ψ0 in a 2D (shown on the left) and 3D Domain (shown on
the right) - (Courtesy of [Labbe]) .......................................................................................................................... 142
Figure 7.5 A square of side 10 and centered on the origin, endowed with the “Starred” metric
field ..................................................................................................................................................................................... 145
Figure 7.6 Anisotropic Triangulation of a Rectangle Endowed with the Hyperbolic Shock
Metric Field - (Courtesy of [Labbe]) ..................................................................................................................... 146
Figure 7.7 A square of side 6 and centered on the origin, endowed with the “Swirl” metric field
- (Courtesy of Labbé et al.)...................................................................................................................................... 146
Figure 7.8 The Optimized SRDT of 4000 Seeds in a Planar Domain Endowed with a Hyperbolic
Shock Induced Metric Field (left). On the right, a zoom on a rotational region of the metric field
shows the Difference between pre- (above) and post- (bottom) Optimization – (Courtesy of
Labbé et al.) .................................................................................................................................................................... 148
Figure 7.9 Isotropic and Anisotropic Canvas Sampling - (Courtesy of [Labbe])............................ 149
Figure 7.10 Unit Sphere Endowed with the Hyperbolic Metric field - (Courtesy of [Labbe]) .. 149
Figure 7.11 The discrete Riemannian Voronoi Diagram of 1020 seeds on the “Chair” surface,
with a curvature induced metric field; the edges of the curved Riemannian Delaunay
triangulation are traced in black - (Courtesy of [Labbe]) ............................................................................ 150
Figure 7.12 Discrete Riemannian Voronoi Diagram (top) and Curved Riemannian Delaunay
Triangulation (bottom) Endowed with the Hyperbolic Shock Metric Field - (Courtesy of [Labbe])
.............................................................................................................................................................................................. 151
Figure 8.1 Hybrid Grid and Steady State Solution ...................................................................................... 153
Figure 8.2 Local Remeshing ................................................................................................................................. 154
Figure 8.3 Hybrid Mesh on a Wing-Body-Pylon-Nacelle Configuration – Courtesy of Centrum®
.............................................................................................................................................................................................. 156
Figure 8.4 Comparison of Different Mesh Types for RANS Computations ....................................... 160
Figure 8.5 The Optional Target Side (colored white) is used to Define the Extrusion Distance
.............................................................................................................................................................................................. 161
Figure 8.6 The bottom-up revolve method sweeps the source side mesh by the specified angle
about the axis ................................................................................................................................................................. 162
Figure 8.7 Constructions of Automated Hybrid Mesh ............................................................................ 162
Figure 8.8 Predominantly Polyhedral Meshing with Extrusion Layers ............................................. 163
Figure 8.9 Meshes Generated by a) Proposed Algorithm and b) Leading Commercial Vendor
.............................................................................................................................................................................................. 164
Figure 8.10 Adaptive Mesh Refinement Types ............................................................................................ 165
Figure 8.11 An H-refinement mesh about a Shuttle-like body (left) and Computed CP (right) 166
Figure 8.12 Isotropic vs. Anisotropic Meshing............................................................................................. 167
Figure 8.13 Coarsening by Edge Collapsing – Courtesy of [Cavallo]................................................... 168
Figure 8.14 Hierarchy of Successively Coarser Meshes Obtained by Uniform ............................... 169
Figure 8.15 Coarsening ratio for coarsening with and without local Re-triangulation. ............. 170
Figure 8.16 3 to 2 and 2 to 3 Swap ................................................................................................................... 171
Figure 8.17 Comparison of Coarse, Medium and Fine Grids: lateral view on fore-body with
Symmetry......................................................................................................................................................................... 174
Figure 8.18 Local Dissipation Error of Drag Coefficient on field cut-plane at x =1400 inch;
isometric/downstream view ................................................................................................................................... 175
Figure 8.19 Comparison of SolarChimera5 and Solar Grid at x = 1454 inch plane; Viscous Wall
Surface in Dark .............................................................................................................................................................. 176
Figure 8.20 HiLiftPW-3 Experience .................................................................................................................. 178
Figure 9.1 JSM Model with Engine Nacelle .................................................................................................... 179
Figure 9.2 Computational Domain of the HL-CRM Gapped Flaps Model .......................................... 180
Figure 9.3 Computational Domain and Separation of Zones of the JSM Model with Engine
Nacelle ............................................................................................................................................................................... 181
Figure 9.4 Three Locations of Problematic Areas of the JSM Geometry for the Generation of
Boundary Layers ........................................................................................................................................................... 181
Figure 9.5 Batch Mesh Setup for the JSM Model with Size Boxes for Local Mesh Control ......... 182
Figure 9.6 Resulting Layers for Isotropic Surface Mesh (Top) and Anisotropic (Bottom)........ 184
Figure 9.7 Close-ups of Coarse CRM Gapped Flap Model with Comparison of Tria Dominant
(Top) vs. Quad Dominant (Bottom) Surface Mesh ........................................................................................... 185
Figure 9.8 Volume Mesh of the JSM .................................................................................................................. 187
Figure 9.9 CL and CD for CRM Geometry at 8 degree AoA using OpenFOAM and STAR-CCM+ . 188
Figure 9.10 Lift and Drag Coefficients for the JSM Geometry using OpenFOAM and STAR-CCM+
.............................................................................................................................................................................................. 189
Figure 9.11 Multi gridding Cycles ...................................................................................................................... 190
Figure 9.12 Sequence of gridding in unstructured multigrid scheme .............................................. 191
Figure 9.13 Illustration of a node-centered median-dual control volume ....................................... 193
Figure 9.14 Trailing-edge area of a 3D wing agglomerated by the hierarchical scheme. Primal
grid is shown by thin lines; agglomerated grid is shown by thick lines. ............................................... 194
Figure 9.15 Typical implicit line-agglomeration showing a curved solid body surface on the left
and a symmetry plane on the right. The projection of the line-agglomerations can be seen on the
symmetry plane............................................................................................................................................................. 195
Figure 9.16 Grids and convergence of the model diffusion equation for the F6 wing-body
combination .................................................................................................................................................................... 196
Figure 9.17 Grids and Convergence of the Model Diffusion Equation for the DPW-W2 case .. 197
Figure 9.18 Grids and Convergence for the wing-flap inviscid case. .................................................... 198
Figure 9.19 Residual versus CPU time for the F6 wing-body case (RANS) ...................................... 204
Figure 9.20 Interface Agglomeration Procedure Wing – Courtesy of [Laiping et al.] ................. 209
Figure 9.21 Initial Hybrid Grids and Coarsen Grids Wing – Courtesy of [Laiping et al.] ........... 210
Figure 9.22 Initial Hybrid Grids and Coarsening Grids over 30P30N Airfoil Wing – Courtesy of
[Laiping et al.] ................................................................................................................................................................ 212
Figure 9.23 CP Distribution on Solid Wall Wing Courtesy of [Laiping et al.].................................. 212
Figure 9.24 Initial Hybrid Grids and Coarsening Grids over ONERA M6 Wing – Courtesy of
[Laiping et al.] ................................................................................................................................................................ 213
Figure 9.25 Close-up Views of Hybrid Grids After Agglomeration Wing – Courtesy of [Laiping
et al.]................................................................................................................................................................................... 214
Figure 9.26 Aerodynamic Force Coefficients for Different Angles of Attack (M∞ = 0.75) Wing –
Courtesy of [Laiping et al.] ....................................................................................................................................... 215
Figure 9.27 Hybrid Grids over DLR-F6-WBNP Configuration Wing – Courtesy of [Laiping et al.]
.............................................................................................................................................................................................. 216
Figure 9.28 CP Distributions at Three Typical Sections (M = 0.75, α = 1.0 deg) Wing – Courtesy
of [Laiping et al.]........................................................................................................................................................... 217
Figure 10.1 Example Adaptive Grid for Supersonic Wedge Flow ........................................................ 220
Figure 10.2 Schematic image of Adaptive Mesh Refinement ................................................................. 221
Figure 10.3 Octree Data Structure of Adaptive Cartesian Grid Method ............................................ 221
Figure 10.4 Schematic 2D view of angular variation of normal............................................................ 222
Figure 10.5 Pressure Contours in 2D Backward Step ............................................................................... 222
Figure 10.6 Selected Initial Meshes for the Transient Adaptive Procedure (Meshes 3, 20, 27
and 29) .............................................................................................................................................................................. 223
Figure 10.7 30P30N Multi-Element Airfoil & close up of slat ................................................................ 225
Figure 10.8 2D Fuel Cell Slice & Zoomed ........................................................................................................ 226
Figure 10.9 Grid Adaption using Supersonic Flow for an Airfoil (bow shock) ............................... 226
Figure 10.10 NACA 0012 Transonic test case: M∞ = 0.8, α=1.25 .......................................................... 226
Figure 10.11 Two-Pass Approach for Parallel Coarsening and Refinement. ........................................ 228
Figure 10.12 Store position, orientation, and surface pressures at selected points in trajectory...... 228
Figure 10.13 Adapted Mesh Partitioning During Store Dispense ............................................................. 228
Figure 10.14 Inter-Processor Partitioning Based on Laplace Coefficients....................................... 229
Figure 10.15 Hybrid Icosahedra Surface Mesh (left) and Multi-Material Hybrid Volume Mesh
(right) – (Courtesy of Khamayseh and Almeida)............................................................................................. 230
Figure 10.16 HTTR Multi-Material Geometry, Initial Coarse Mesh (left), Refined Mesh (right)
– (Courtesy of Khamayseh and Almeida) ............................................................................................................. 231
Figure 10.17 Orography field (left), r-adaptivity (center) and h-adaptivity (right) for climate
modeling – (Courtesy of Khamayseh and Almeida) ........................................................................................ 232
Figure 10.18 Coupled orography field transfer with h-adaptivity. Planar orography field (top),
.............................................................................................................................................................................................. 233
Figure 10.19 Meshing and Partitioning of Centrifugal Contactor – (Courtesy of Khamayseh
and Almeida) .................................................................................................................................................................. 234
Figure 10.20 Discretization Error in the Drag Coefficient for Transonic Flow over an Airfoil
(Reproduced from Dwight) ...................................................................................................................................... 236
Figure 10.21 Steady-State Burgers Equation for Reynolds Number 32 ............................................ 239
Figure 10.22 Adaption Schemes Applied to Burgers Equation: (left) numerical solutions and
(right) local nodal spacing Δx. ................................................................................................................................... 239
Figure 10.23 Extraction of a Flow Feature & Redistributed Volume Meshes ................................... 243
Figure 10.24 Hybrid Meshes for the NACA0012 Wing-Section and Cp Distribution (-1.0 to 1.0)
.............................................................................................................................................................................................. 244
Figure 10.25 Adaptive Re-meshing of Capsule ............................................................................................ 245
Figure 11.1 Mesh Deformation Problem ........................................................................................................ 248
Figure 11.2 Cylinder Motion in 2D .................................................................................................................... 251
Figure 11.3 Mesh Deformation via Bi-Harmonic Equations................................................................... 252
Figure 11.4 Mesh Deformation via Laplace & RBF Methods .................................................................. 253
Figure 11.5 Outlet Guide Vane (OGV) Boundary Surfaces Defined for a Single Passage ............ 254
Figure 11.6 Typical Angular Variation Between the Computed Distance Field Vector and the
Surface Normal (OGV not shown to scale) ......................................................................................................... 255
Figure 11.7 GGI interface ...................................................................................................................................... 256
Figure 11.8 Overset Method ................................................................................................................................ 257
Figure 11.9 Delaunay Method of Dynamic mesh......................................................................................... 258
Figure 11.10 Mesh Before and After the Translational Deformations ............................................... 259
Figure 11.11 Mesh Before and After the x-axis Rotational Deformation .......................................... 260
Figure 11.12 Spiral Inductor Geometry where P1 and P2 denote Port 1 and Port 2 ................... 262
Figure 11.13 Vertical field evolution and associated mesh refinement in the microstrip spiral
inductor, simulated by a two-level dynamic AMR-FDTD ............................................................................. 262
Figure 12.1 Backward facing step in a duct using Polyhedral, Hexahedral and Tetrahedral cells
.............................................................................................................................................................................................. 264
Figure 12.2 Effect of truncation error on Hex and Tet cells ................................................................... 265
Figure 12.3 Are Bees Smarter than the Average CFD Engineer? (Courtesy of Stephen Ferguson)
.............................................................................................................................................................................................. 265
Figure 12.4 Polyhedral cells vs Tetrahedral cells ....................................................................................... 266
Figure 12.5 Boundary prism cells for tetrahedral (left) and polyhedral (right) cells – (Courtesy
of CD-Adapco) ................................................................................................................................................................ 267
Figure 12.6 GG simple face averaging .............................................................................................................. 269
Figure 12.7 GG Inverse Distance Weighted (IDW) Face Interpolation .............................................. 270
Figure 12.8 Methodologies for various Gradient Order of Accuracy .................................................. 271
Figure 12.9 Global Error Norms for x-Direction Gradient for Various Gradient Methods ........ 272
Figure 12.10 Notional Launch Vehicle was Imported from an IGES file – (Courtesy of Pointwise
Inc.) ..................................................................................................................................................................................... 274
Figure 12.11 Broken Rules are Displayed on-screen as a Guide to Repair – (Courtesy of
Pointwise Inc.) ............................................................................................................................................................... 274
Figure 12.12 T-Rex Mesh Generates Near-Wall Hex Layers for Boundary Layer Resolution and
Transitions to an Isotropic Tetrahedral Mesh in the Far Field – (Courtesy of Pointwise Inc.) .... 276
Figure 12.13 Visualization of Boundary Conforming Anisotropic Elements................................... 277
Figure 12.14 Linear vs Full Curving (Non-Linear) Mesh (Courtesy of Karman) ........................... 278
Figure 12.15 Visualization of Three Non-Linear Interpolating Polynomial .................................... 279
Figure 12.16 Distribution of points in a Fourth-Order Triangle and the Six Spring System
Linking the Free Nodes - Fekete Distribution................................................................................................... 282
Figure 12.17 The High-Order Surface Created Using the Affine Mapping with / without
Optimization in a Region of High Distortion in the CAD Surface (left to right) .................................. 282
Figure 12.18 The Elemental RP1 Road Car.................................................................................................... 286
Figure 12.19 High-Order Surface Meshing for 2 Designs ........................................................................ 287
Figure 12.20 Surface Simulation for Pressure - D1 (left) D2 (right)................................................... 288
Figure 12.21 Underside of the RP1 car surface mesh, design 1 (D1) left, design 2 (D2) right. 289
Figure 12.22 Optimization of An Originally Invalid (blue) 2D Example Mesh ............................... 291
Figure 12.23 Optimization of 10th Quadrilateral Mesh Showing the Initial Configuration and
Optimization using the Hyper Elastic and Distortion Functional............................................................. 292
Figure 12.24 Shows the Displacement Residual and Quality, Q, of the Cube Sphere Mesh....... 293
Figure 12.25 Optimization of 4th Order Sphere mesh from the Initial Configuration.................. 293
Figure 12.26 Optimization of the DLR F6 Geometry ................................................................................. 294
Figure 12.27 Cross Section of a Semi-Sphere Case Highlighting the Sliding of CAD .................... 295
Figure 12.28 Hybrid prismatic-tetrahedral mesh of the Boeing reduced landing gear
configuration before (a) and after (b) optimization, and after the isoperimetric splitting is
applied (c). Note that the color of the surface triangles is not related to mesh quality .................. 296
Figure 12.29 Element Quality Histograms of the Boeing Reduced Landing Gear Configuration
for Initial and various Optimization Settings .................................................................................................... 297
Figure 12.30 High-Order Mesh of the NACA Wing ..................................................................................... 298
Figure 12.31 Enlargements of Regions of the NACA Wing Mesh.......................................................... 299
Figure 13.1 Comparison of Hex (16 K Cells) and Tet (440 K Cells) for a Pipe with 90 Degree
Bend ................................................................................................................................................................................... 300
Figure 13.2 Results of Hex vs Tet Meshes as well as Hybrid Mesh in a Pipe with 90 Degree
Bend ................................................................................................................................................................................... 301
Figure 13.3 Design of Propellers, (left) Propeller P5168, ....................................................................... 303
Figure 13.4 Computational Domain– (Courtesy of Morgut & Nobile)................................................ 303
Figure 13.5 Meshing for Propeller P5168– (Courtesy of Morgut & Nobile) .................................... 305
Figure 13.6 KT , KQ and η Curves of Propeller A – (Courtesy of Morgut & Nobile) ........................ 306
Figure 13.7 KT and KQ curves of Propeller P5168 – (Courtesy of Morgut & Nobile) .................... 307
Figure 13.8 Flow Around Turbine Blade – (Courtesy of Sasaki et al.) .................................................. 309
Figure 13.9 Geometric Blocking Used (a) Structured Hexahedral (178 Blocks) and (b)
Unstructured Hexahedral (80 Blocks) – (Courtesy of Samir Vinchurkar & Worth Longest)........ 311
Figure 13.10 Four Meshing Styles of the PRB Model (a) Structured Hexahedral, (b)
Unstructured Hexahedral, (c) Prismatic, and (d) Hybrid – (Courtesy of Samir Vinchurkar &
Worth Longest) ............................................................................................................................................................. 312
Figure 13.11 Velocity Vectors (a) Structured Hexahedral Mesh with 214K C.V., (b)
Unstructured Hexahedral Mesh with 318K C.V., (c) Prismatic Mesh with 510K C.V., (d) Hybrid
Mesh with 608K C.V. – (Courtesy of Samir Vinchurkar & Worth Longest) ........................................... 319
Figure 13.12 Deposition Locations for 10 μm Particles in the Planar Geometry for the (a)
Structured Hexahedral Mesh, (b) Unstructured Hexahedral Mesh, (c) Prismatic Mesh, and (d)
Hybrid Mesh – (Courtesy of Samir Vinchurkar & Worth Longest) .......................................................... 320
Figure 13.13 Boundary Layer Transition Between Prismatic and Volume Elements – (Courtesy
of Rousseau et al.)......................................................................................................................................................... 324
Figure 13.14 Example of a hydraulic turbine spiral case (half domain) ........................................... 325
Figure 13.15 Geometry of the Stay Vanes and Wicket Gates, Left: Geometry A, Right: Geometry
B – (Courtesy of Rousseau et al.) ............................................................................................................................ 325
Figure 13.16 Structured Hexahedral Mesh of the Geometry A on the Symmetrical Surface and
Close Up – (Courtesy of Rousseau et al.) ............................................................................................................. 326
Figure 13.17 Hybrid Tetrahedral Medium Mesh on the Symmetric Surface of the Geometry A
(left) & Mesh in the wake of a Hydraulic Profile (wicket gates trailing edge)(right) – (Courtesy of
Rousseau et al.).............................................................................................................................................................. 327
Figure 13.18 Relative Total Head Loss on the Meridian Plane for the Geometry A with fine
mesh, left: Structured Hexahedral, right: Hybrid Tetrahedral – (Courtesy of Rousseau et al.) ... 328
Figure 13.19 Meridian Velocity Near a Stay Vane with fine mesh for Geometry A, left:
Structured Hexahedral, right: Hybrid Tetrahedral – (Courtesy of Rousseau et al.) ......................... 329
Figure 13.20 Meridian Velocity on the Meridian Plane for the Geometry B – (Courtesy of
Rousseau et al.).............................................................................................................................................................. 330
Figure 14.1 B-Spline Approximation of NACA0012 (left) and RAE2822 (right) Airfoils ........... 334
Figure 14.2 Six Control Point Representation of a Generic Airfoil ...................................................... 334
Figure 14.3 Free Form Deformation (FFD) for Volume Grid with Control Points (Courtesy of
Kenway et al.) ................................................................................................................................................................. 335
Figure 14.4 Sample Grid and Grid Sensitivity............................................................................................... 336
Figure 14.5 ONERA M6 grid used for evaluating ........................................................................................ 342
Figure 14.6 Convergence rates for direct and adjoint modes. ............................................................... 343
Figure 14.7 Surface grid for slotted wing–body .......................................................................................... 344
Figure 14.8 Lift-to-drag ratio during unconstrained optimization of wing–body configuration
.............................................................................................................................................................................................. 344
Figure 14.9 Results for irregular but non-curved triangular grids (Γ = 0 everywhere). ............ 348
Figure 14.10 Results for an irregular triangular grid over a Joukowsky airfoil. Note that the
contours are plotted in (c) and (d) with a restricted range, [0; 25], to visualize the variation near
the airfoil. The maximum value is 902,788 and the average is 10,505. ................................................. 349
Figure 14.11 Results for an irregular triangular grid over a half-cylinder domain with straight
boundaries....................................................................................................................................................................... 350
Figure 14.12 Mesh Independence ..................................................................................................................... 351
Figure 14.13 Effects of Mesh Density on Solution Domain ..................................................................... 351
Figure 15.1 Mega Meshing for Aircraft Landing & Takeoff – Courtesy of Centaur©..................... 353
Figure 15.2 Vectors used for computing the weighted condition number of a prism at the
corner shared by edges E, F, and G ........................................................................................................................ 355
Figure 15.3 Transforming an Ideal Corner (left) to the Desired Shape (right) .............................. 356
Figure 15.4 Computing the Weight Vector for the Bottom (left) and top (right) Faces of a Prism
.............................................................................................................................................................................................. 356
Figure 15.5 Computing the Weight Vector for the Bottom (left) and Top (right) Faces of a
Hexahedron..................................................................................................................................................................... 357
Figure 15.6 Close-Up View of a Hybrid Mesh Near the Tip of the ONERA M-6 Wing .................. 358
Figure 15.7 40 Extrusion layers on the Symmetry Plane of the ONERA-M6 Wing at the Leading
Edge for Smoothing Exponent P = 0 (left) and P = 2 (right) ....................................................................... 358
Figure 15.8 Preprocessing Workflow .............................................................................................................. 360
Figure 15.9 A Tetrahedral Mesh of a Gas Turbine ...................................................................................... 361
Figure 15.10 Skewness of Element Quality (Before Changing the Order of Meshing) ................ 362
Figure 15.11 Skewness of Element Quality (After Changing the Order of Meshing) ................... 363
Figure 15.12 The Coil Geometry - The Zoomed in View shows the Narrow Region Between the
Coil Turns......................................................................................................................................................................... 364
Figure 15.13 Resulting Mesh ............................................................................................................................... 364
Figure 15.14 Meshing Tools in CD-Adapco.................................................................................................... 366
Figure 15.15 Constructions of Hybrid Mesh ................................................................................................. 367
Figure 15.16 Predominantly Polyhedral Meshing with Advanced (Extrusion) Layer in
Boundaries ...................................................................................................................................................................... 368
Figure 15.17 Combined Volume and Extrusion Layer Meshes ............................................................. 368
Figure 15.18 Background Grids for Discretization of the Distance Function and the Mesh Size
Function............................................................................................................................................................................ 371
Figure 15.19 Example of gradient limiting with an unstructured background grid .................... 379
Figure 15.20 Another example of gradient limiting, showing that non-convex regions are
handled correctly.......................................................................................................................................................... 380
Figure 15.21 A Mesh Size Function Taking into Account Feature Size, Curvature, and
Gradient Limiting. The Feature Size is computed as the sum of the distance function and the
distance to the medial axis. ...................................................................................................................................... 380
Figure 15.22 Generation of a Mesh Size Function for a Geometry with Smooth Boundaries. . 381
Figure 15.23 Cross-Sections of a 3D Mesh Size Function and a Sample Tetrahedral Mesh ...... 381
Figure 15.24 Numerical adaptation for compressible flow .................................................................... 383
Figure 15.25 Gradient limiting with space-dependent g(x). .................................................................. 384
Figure 15.26 Meshing objects in an image ..................................................................................................... 384
Figure 15.27 Gradient limiting with solution-dependent g(h). The distances between the level
sets of h(x) are smaller for small h, giving a faster increase in mesh size. ........................................... 385
Figure 15.28 Delaunay Tessellation of Boundary Vertices used for Background Mesh ............. 387
Figure 15.29 Boundary and Transition Layers Adjacent to Model Boundary ................................ 390
Figure 15.30 Placement of Sizing Vertices at the Boundary and Transition Layers .................... 391
Figure 15.31 Background Mesh used for Boundary Layer Mesh. Contours Generated From
Natural Neighbor Interpolation are Over-Laid ................................................................................................ 391
Figure 15.32 Boundary Layer Mesh With 1:200 Transition................................................................... 392
Figure 15.33 Close-Up of Boundary Layers in Previous Figure ............................................................ 392
Figure 15.34 Approximation of Radius of Curvature Between Two Points A,B on a Surface .. 393
Figure 15.35 Background mesh and contours of sizing function for .................................................. 394
Figure 15.36 Parametric surface meshed with two different max spanning angles (φ). Left:
φ = 15 degrees, right: φ = 30 degrees ...................................................................................................................... 395
Figure 15.37 Comparison of meshing results from: (a) pre-meshed inner circle with no growth
control and (b) from size function with growth controlled ........................................................................ 396
Figure 15.38 Demonstration of the Effective Domain of Size Function ............................................. 399
Figure 15.39 Refinement of Proximity Facets .............................................................................................. 400
Figure 15.40 Refining Criterion for a Background Cell (A - Actual Size Distribution from
Defined Size Functions, B – Size by Linear Interpolation from 8 corner points, C – Source Entities
Possibly with Smaller Size Inside the Cell) ........................................................................................................ 404
Figure 15.41 Meshing the Nasty Clown Using a Single Curvature Size Function .......................... 405
Figure 15.42 Use of Proximity Size Functions in Volume Meshing ..................................................... 406
Figure 15.43 Use of Proximity and Curvature Size Functions in Meshing a Volume with Airfoil
Voids .................................................................................................................................................................................. 407
Figure 15.44 Meshing Results Using Composite Size Functions where Three kinds of Size
Functions are Attached to the Volume ................................................................................................................ 407
Figure 16.1 Predicted Mesh Quality (Volume, Aspect Ratio, and Stretch) ....................................... 410
Figure 16.2 A simple Demonstration of How a Poor Mesh from a Cell Geometry Perspective 412
Figure 16.3 Using Kestrel one can Show a Correlation Between Mesh and Solution Quality .. 413
Figure 16.4 Concept of Orthogonality in Cells .............................................................................................. 414
Figure 16.5 Skewness and Warpage................................................................................................................. 415
Figure 16.6 Tetrahedral Volume ........................................................................................................................ 415
Figure 16.7 Triangulation of a polygon ........................................................................................................... 416
Figure 16.8 Tetrahedralization of a Polyhedral (showing a single face) .......................................... 417
Figure 16.9 Boundary (top) and Interior of Polyhedral Mesh............................................................... 418
Figure 16.10 Initial Surface Mesh (top) and Smoothed surface mesh (bottom) ........................... 418
Figure 16.11 Multi-Connected Non-Convex region with a Clearly Invalid Initial 2D Planar Mesh
(left),................................................................................................................................................................................... 418
Figure 16.12 General Estimation of Surface Mesh Element Size .......................................................... 421
Figure 16.13 Mesh Resolution for Sideview Mirror – Courtesy of Lanfrit ...................................... 424
Figure 16.14 Prism Layer Growth – Courtesy of Lanfrit ......................................................................... 425
Figure 16.15 Handling Prism Sides using Non-conformal Interfaces – Courtesy of Lanfrit .... 426
Figure 16.16 Impact of Local Refinement on Tetrahedral Mesh – Courtesy of Lanfrit .............. 427
Figure 17.1 Symmetry plane (XY) ..................................................................................................................... 430
Contributors
➢ Roy Koomullil, Bharat Soni, Rajkeshar Singh, “A comprehensive generalized mesh system for
CFD applications”, Mathematics and Computers in Simulation 78 (2008).
➢ Narayan, K. Lalit. Computer Aided Design and Manufacturing. New Delhi, 2008.
➢ Duggal, Vijay. Cadd Primer: A General Guide to Computer Aided Design and Drafting-Cadd,
Mailmax Pub. ISBN 978-0962916595, 2000.
➢ Christophe Geuzaine, Emilie Marchandise, and Jean-François Remacle, “An introduction to
Geometrical Modelling and Mesh Generation”, The Gmsh Companion.
➢ Butlin, G., Stops C., “CAD Data Repair”, Proc. 5th Int. Meshing Roundtable, pp. 7-12, 1996.
➢ Mezentsev, A.A. and Woehler, T., “Methods and algorithms of automated CAD repair for
incremental surface meshing”, Proc. 8th Int. Meshing Roundtable, Sandia report SAND 99-
2288, pp. 299-309, 1999.
➢ Ribo, R., Bugeda, G. and Onate, E., “Some algorithms to correct a geometry in order to create a
finite element mesh”, Computers and Structures, 80:1399-1408, 2002.
➢ Richardson LF. Weather prediction by numerical process. Cambridge: Cambridge University
Press; 1921.
➢ Edelsbrunner H. “Geometry and topology for mesh generation”, Cambridge: Cambridge
university, 2001.
➢ Baker, T., “Mesh generation: Art or science?” MAE Department, Princeton University,
Princeton, NJ.
➢ Steven J. Owen, “A Survey of Unstructured Mesh Generation Technology”, Carnegie Mellon
University, PA.
➢ Steven Owen, “Introduction to Unstructured Mesh Generation”, 2005.
➢ Bauer F, Garabedian P, Korn D. Supercritical wing sections I, Lecture Notes in Economics and
Mathematical Systems, vol. 66. Berlin: Springer; 1972.
➢ Moretti G.”Grid generation using classical techniques”. Proceedings of the NASA Langley
workshop on numerical grid generation techniques, Langley, VA, October, 1980.
➢ Caughey DA, “A systematic procedure for generating useful conformal mappings”, Int J Num
Meth Eng 1978.
➢ Eriksson LE,”Generation of boundary-conforming grids around wing-body configurations
using transfinite interpolation”, AIAA J 1982; 20:1313–20.
➢ An overview of Grid Pro/az3000 for automated grid generation.
➢ Churchill, R., V., “Introduction to Complex Variables”, McGraw-Hill, New York.
➢ Joe F. Thompson, Z. U. A. Warsi, C. Wayne Mastin, “Numerical Grid Generation -Foundations
and Applications”, North Holland, 1985.
➢ Peter Eiseman and Robert E. Smith, “Applications of Algebraic Grid Generation”, April 1990.
➢ Sadrehaghighi, I., Smith, R.E., Tiwari, S., N., “Grid Sensitivity and Aerodynamic Optimization Of
Generic Airfoils”, Journal of Aircraft, Volume 32, No. 6, Pages 1234-1239.
➢ M. Farrashkhalvat and J.P. Miles, “Basic Structured Grid Generation”, An imprint of Elsevier
Science Linacre House, Jordan Hill, Oxford OX2 8DP, 200 Wheeler Rd, Burlington MA 01803,
First published 2003.
➢ Feng Liu, Shanhong Ji, and Guojun Liao, “An Adaptive Grid Method and Its Application to Steady
Euler Flow Calculations”, SIAM J. Sci. Comput., Vol. 20, No. 3, pp. 811–825, 1998.
➢ Maël Rouxel-Labbé, “Anisotropic mesh generation”, Université Côte d’Azur, 2016.
➢ Vangelis Skaperdas, and Neil Ashton, “Development of high-quality hybrid unstructured
meshes for the GMGW-1 workshop using ANSA”, AIAA, January 2018.
➢ David A. Venditti and David L. Darmofal, “Grid Adaptation for Functional Outputs:
Application to Two Dimensional Inviscid Flows", Journal of Computational Physics 176, 40–
69 (2002).
➢ Cavallo, P.A., Sinha, N., and Feldman, G.M., “Parallel Unstructured Mesh Adaptation for
Transient Moving Body and Aeropropulsive Applications”, Combustion Research and Flow
Technology, Inc. (CRAFT Tech), PA 18947.
➢ Hrvoje Jasak, Željko Tuković, “Automatic Mesh Motion for the Unstructured Finite Volume
Method”, ISSN 1333–1124, UDK 532.5:519.6.
➢ Jia Huan, Sun Qin, “A Comparison of Two Dynamic Mesh Methods in Fluid–Structure
Interaction”, School of Aeronautics, Northwestern Polytechnical University, Xi’an, China. 2nd
International Conference on Electronic & Mechanical Engineering and Information
Technology (EMEIT-2012).
➢ Fluent, “Meshing and CFD Accuracy”, CFD Summit, June 2005.
➢ Mitja Morgut, Enrico Nobile, “Comparison of Hexa-Structured and Hybrid-Unstructured
Meshing Approaches for Numerical Prediction of the Flow Around Marine Propellers”, First
International Symposium on Marine Propulsions smp’09, Trondheim, Norway, June 2009.
➢ Daisuke Sasaki, Caleb Dhanasekaran, Bill Dawes, Shahrokh Shahpar, “Efficient Unstructured
Hybrid Meshing and its Quality Improvement for Design Optimization of Turbomachinery”,
European Conference on Computational Fluid Dynamics, ECCOMAS CFD 2006.
➢ Samir Vinchurkar, P. Worth Longest, “Evaluation of hexahedral, prismatic and hybrid mesh
styles for simulating respiratory aerosol dynamics”, Computers & Fluids, 2008.
➢ Per-Olof Persson, “Mesh Size Functions For Implicit Geometries and PDE-Based Gradient
Limiting”, Dept. of Mathematics, Massachusetts Institute of Technology.
➢ Steven J. Owen and Sunil Saigal, “Neighborhood-Based Element Sizing Control for Finite
Element Surface Meshing”, Department of Civil and Environmental Engineering, Carnegie
Mellon University And ANSYS Inc. 275 Technology Drive, Cannonsburg, PA, USA.
➢ Hiroaki Nishikawa, Boris Diskin and James L. Thomas, “Development and Application of
Agglomerated Multigrid Methods for Complex Geometries “, 40th Fluid Dynamics Conference
and Exhibit, 28 June - 1 July 2010, Chicago, Illinois.
1 Introduction
1.1 The Black Box Dilemma1
1.1.1 Trust the Mesh Generated by the Software, or Take a Proactive Approach?
Are you the type who likes to take a peek inside the black box to see how it works? Or are you one
who’s willing to put your faith in the black box? So argued [Kenneth Wong] of Digital Engineering.
The answer to that may offer clues to the type of meshing applications that appeal to you. But that’s
not the only factor. Your own finite element analysis (FEA) skills also play a role. Most simulation
programs aimed at design engineers offer fully or almost fully automated meshing. In other words,
the software makes most or all of the mesh-related decisions required. Your part may be limited to
selecting the desired resolution or level of detail: fine meshing (high resolution, which takes more time
but is more accurate) or coarse meshing (low resolution, which takes less time but involves more
approximation).
There are good reasons to keep the meshing process hidden inside the black box, as it were. It takes
a lot of experience and expertise (perhaps even a Ph.D.) to understand the difference between, say, a
hexahedral mesh and a tetrahedral mesh; or tri elements and quad elements. It takes considerable
simulation runs to know what type of meshing methods work well for a particular set of solid
geometry. It requires yet another level of wisdom to know how to manually readjust the software-
generated meshes to more accurately account for the problematic curvatures, corners and joints in
your geometry. These are beyond the scope of what most design engineers do. Therefore, many argue
presenting a design engineer with a menu of these choices is counterproductive. On the other hand,
expert users with a lot of analysis experience know the correlations between mesh types and
accuracy, so they may want to get more involved in the meshing process. For this reason, high-end
analysis software usually offers many more knobs and dials in the meshing process. Depriving expert
users of these choices would force them to accept what they know to be unacceptable
approximations. To navigate between the two different approaches, you need at least some
understanding of how meshing works, automated or manual.
1.1.2 Not All the Meshes Created Equal
[Abdullah Karimi], a CFD analyst for Southland Industries, uses fluid dynamics programs
to examine airflow and heat distribution to develop the best residential heating solutions for his
company’s clients. Via an online blog by Southland Industries, Karimi penned an article titled “How
Not to Mesh Up: Six Mistakes to Avoid When Generating CFD Grids”. His first tip: Never use the first
iteration of automatically generated mesh. “I’ve realized even some people with Ph.D.’s don’t have a
good grasp on meshing,” he says. “People say, garbage in, garbage out. I say, good mesh equals good
results. But the vast majority of the times I’ve seen the [software’s] automatically generated initial
mesh is too coarse. The mesh may not even work, and if it does, the result may not be accurate.” If
the automatically generated mesh significantly distorts the original geometry’s prominent
characteristics, such as rounded corners, sharp angles and smooth curves, it may be a sign that the
mesh needs manual intervention in those specific regions. “You should at least take a look at the
mesh. You can check for sudden size transitions, aspect ratio, skewness and
triangular distortions. Just by visually inspecting the mesh, you can get a good idea if this may or may
not work for your problem,” says Karimi.
In his article, Karimi advises, “Don’t hit ‘Run’ without a mesh quality inspection. Depending on the
robustness of the solution scheme, this could cause serious issues like straightaway divergence of the
solution ... There are several quality metrics that need attention depending on mesh type and flow
problem. Some of these metrics include skewness, aspect ratio, orthogonality [and] negative volume.”
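As an illustration of the kind of quality inspection Karimi describes, the short Python sketch below computes two common metrics, aspect ratio and equiangular skewness, for a single triangular element. The element coordinates and the simple longest-over-shortest-edge definition of aspect ratio are illustrative assumptions only; solvers use a variety of definitions.

```python
import numpy as np

def triangle_quality(p0, p1, p2):
    """Return (aspect_ratio, equiangular_skewness) for one triangle.

    Aspect ratio is taken here simply as longest/shortest edge; treat
    this as an illustrative check, not any particular solver's metric.
    """
    pts = [np.asarray(p, dtype=float) for p in (p0, p1, p2)]
    # Edge vectors and lengths
    e = [pts[(i + 1) % 3] - pts[i] for i in range(3)]
    L = np.array([np.linalg.norm(v) for v in e])
    aspect = L.max() / L.min()

    # Interior angles from the law of cosines (order does not matter here)
    a, b, c = L
    angles = np.degrees([
        np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1.0, 1.0)),
        np.arccos(np.clip((a**2 + c**2 - b**2) / (2 * a * c), -1.0, 1.0)),
        np.arccos(np.clip((a**2 + b**2 - c**2) / (2 * a * b), -1.0, 1.0)),
    ])
    theta_e = 60.0  # ideal angle for a triangle
    skew = max((angles.max() - theta_e) / (180.0 - theta_e),
               (theta_e - angles.min()) / theta_e)
    return aspect, skew

# Example: a nearly flat triangle that a visual inspection would flag
ar, sk = triangle_quality((0.0, 0.0), (1.0, 0.0), (0.5, 0.05))
print(f"aspect ratio = {ar:.1f}, skewness = {sk:.2f}")
```

Looping such a check over every cell and reporting the worst values is the automated counterpart of the visual inspection described above.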
Figure 1.1 Meshes Created using ANSYS Mosaic-Enabled Poly-Hex Core Meshing - Courtesy of
Sheffield Hallam University
mesh and tweak it they can take control,” says Wade. Figure 1.1 shows meshes were created using
ANSYS Mosaic-enabled Poly-Hex core meshing that automatically combines disparate meshes with
polyhedral elements for fast, accurate flow resolution. ANSYS Fluent provides Mosaic-enabled
meshing as part of a single-window, task-based workflow. Image courtesy of Sheffield Hallam
University, Centre for Sports Engineering Research; and ANSYS.
1.1.4 Regional Meshing
The relatively new startup OnScale recently began offering on-demand multi-physics simulation
from the browser. Some firms like Rescale offer high-performance computing (HPC) resources
needed to run simulation, but not the software. By contrast, OnScale offers both the hardware and
the multi-physics solver required to process jobs. “We offer automatic meshing as well as user-
defined meshing. Users can define the level of fidelity desired,” explains Gerald Harvey, OnScale’s
founder and VP of engineering. “OnScale gives you the ability to refine the grid and apply finer
meshes in specific regions.” Not every corner, section or region in your geometry needs fine meshing.
With simple geometry, a coarse mesh with fewer elements may suffice. But in certain regions where
curvature, contact and joints create complex stress concentrations or flow patterns, a finer mesh
(simply put, a higher number of elements covering the area) is warranted. Advanced simulation
programs usually offer tools to specify how to treat these regions. Even in programs that target
design engineers, some tools may be available to treat these regions differently. “In Altair FEA
products like SimLab, you can perform automatic local mesh refinement,” says Eder. “So you can run
an analysis, review the results, then automatically refine the mesh in areas of high strain energy error
density for subsequent runs. In [expert-targeted] HyperMesh, you also have many more manual
mesh refinement options.”
1.1.5 Simulation Cost
OnScale’s Harvey suggests running a mesh study to understand the correlation between the stress
effects and the mesh types and mesh density chosen. This can offer clues on how meshing affects the
FEA results. “Every engineer should conduct a mesh convergence study and test the meshes against some
key performance indicators (KPIs) to find a happy medium,” says Harvey. “Suppose you’re looking at
the design of a bracket. Then look at how the different meshes affect the bend angle of the bracket,
for example.” Calculating simulation cost is complex, in part due to the mix of licensing policies in the
market. But fundamentally, two parameters are involved: the time it takes and the hardware it uses.
The need to find simplified meshes (as simple as possible without infringing on the accuracy of
results) largely stems from the desire to keep these two parameters as low as possible. “If you have
a simple solid part and you put 3D meshes on it, it takes more time than necessary to run,” notes
Eder. In such a case, running simulation in a 2D cross-section of the geometry may be much more
efficient. “And think of how many iterations you plan to run, because you’ll be paying that penalty for
every single run,” he adds.
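Harvey's mesh convergence study can be made quantitative with a simple grid-refinement calculation. The sketch below assumes a KPI (for example the bend angle of the bracket he mentions) has already been computed on three systematically refined meshes; it then estimates the observed order of convergence and a Richardson-extrapolated value. The numbers are placeholders, not results from any actual simulation.

```python
import math

def convergence_study(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate observed order p and a Richardson-extrapolated value
    from a KPI computed on three meshes refined by a constant ratio r."""
    # Observed order of convergence (assumes monotone convergence)
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    # Richardson extrapolation toward the zero-spacing limit
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, f_exact

# Placeholder KPI values (e.g. a bend angle in degrees) on coarse/medium/fine meshes
p, f_exact = convergence_study(12.80, 12.35, 12.21)
print(f"observed order ≈ {p:.2f}, extrapolated KPI ≈ {f_exact:.2f}")
```

If the fine-mesh KPI is already close to the extrapolated value, further refinement mainly adds cost, which is exactly the "happy medium" the quote refers to.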
ANSYS’ Wade points out that most solvers prefer hexahedral elements or quad surface mesh because
“they fill the space very efficiently and using such elements when transient or explicit analysis is
required can give massive gains in solve times (minimizing CPU effort for calculations). Hex elements
can follow the flow direction better as well, which has some accuracy benefits. Tetrahedra, polys and
other unstructured methods are very popular because they don’t require the decomposition
(chopping up) of the space like a hex mesh; as a result, they are excellent for automation and really
minimize manual effort.” Another tip from Karimi’s article: “Don’t fill the domain with a ridiculous
number of tetrahedrons. So many times, I see meshing engineers filling up their CFD [computational
fluid dynamics] domains [the target region for fluid analysis] with a large number of tetrahedrons
and then struggling to get simulation results on time.” Certain programs are equipped to make the
mesh selection easier. “With OnScale, you can conduct a study on a sweep of design, with mesh being
one of the variables,” Harvey points out. “In OnScale, the run wouldn’t cost you significantly, because
it would be a one-off cost. And the payback is well worth it.”
cells in the boundary layer2. The output of most grid generation procedures can be summarized as in
Figure 1.2, provided that everything goes according to plan.
➢ Structured Meshes
• Complex Variables (Restricted to 2D)
• Algebraic Techniques (TFI)
• PDE Methods (PDE)
➢ Unstructured Meshes
• Delaunay Triangulation
• Advancing Front
• Octree Method
• Polyhedral Meshes
• Overset Meshes
• Cartesian Meshes
➢ Hybrid Meshes
➢ Adaptive Meshes
• Structured
• Unstructured
2 Roy Koomullil, Bharat Soni, Rajkeshar Singh ,”A comprehensive generalized mesh system for CFD applications”,
Mathematics and Computers in Simulation 78 (2008) 605–617.
3 Steven J. Owen, “A Survey of Unstructured Mesh Generation Technology”, Carnegie Mellon University, PA.
4 Introduction: An Initial Guide to CFD and to this Volume; page 1, 2007.
5 Steven Owen: Introduction to unstructured mesh generation, 2005.
6 Narayan, K. Lalit (2008). Computer Aided Design and Manufacturing. New Delhi: Prentice Hall of India. p. 3.
7 Duggal, Vijay (2000). Cadd Primer: A General Guide to Computer Aided Design and Drafting-Cadd, Mailmax
Pub. ISBN 978-0962916595.
8 Wikipedia.
9 Same Source.
11Christophe Geuzaine, Emilie Marchandise , and Jean-Francois Remacle, “An introduction to Geometrical
Modelling and Mesh Generation”, The Gmsh Companion.
does not need to be created with a great deal of accuracy. It just needs to represent the basic geometry
of the cross section. The exact size and shape of the profile is defined through assigning enough
parameters to fully constrain it.
2.2.4 Parametric Modeling
Parametric modeling means that parameters of the model may be modified to change the geometry
of the model. A dimension is a simple example of a parameter. When a dimension is changed, the
geometry of the part is updated. Thus, the parameter drives the geometry. An additional feature of
parametric modeling is that parameters can reference other parameters through relations or
equations. The power of this approach is that when one dimension is modified, all linked dimensions
are updated according to specified mathematical relations, instead of having to update all related
dimensions individually. Simply put, parametric modelling involves the building or design of 3D
geometrical models piece by piece. The process usually starts with a 2D sketch followed by the
integration of constraints, dimensions, and entities to form a defined 3D model. These constraints,
dimensions, and other entities are known as parameters.
Conversely, non-parametric modelling involves a direct approach to building 3D models without
having to work with provided parameters. Therefore, you will not be required to start with a 2D
draft and produce a 3D model by adding different entities. This means you directly model your ideas
without working with pre-set constraints. That is also why non-parametric modelling is also known
as direct modelling. Today, it is a bit difficult to find CAD applications that are solely nonparametric.
This is because most CAD producers integrate features of parametric modelling with features of
nonparametric models.
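To make the idea of parameters driving geometry through relations concrete, here is a minimal Python sketch, not tied to any particular CAD system, of a rectangular plate with a centered hole. The hole diameter and position are defined by relations to the plate dimensions, so changing one driving parameter updates the dependent ones; the class name, the 25% relation and the dimension values are invented for illustration.

```python
class PlateWithHole:
    """Toy parametric model: the hole is driven by relations to the plate size."""

    def __init__(self, width, height):
        self.width = width      # driving dimension
        self.height = height    # driving dimension

    @property
    def hole_diameter(self):
        # Relation: hole diameter is always 25% of the plate width
        return 0.25 * self.width

    @property
    def hole_center(self):
        # Relation: hole stays centered whatever the plate size
        return (0.5 * self.width, 0.5 * self.height)

plate = PlateWithHole(width=100.0, height=60.0)
print(plate.hole_diameter, plate.hole_center)   # 25.0 (50.0, 30.0)

plate.width = 80.0          # modify one parameter...
print(plate.hole_diameter, plate.hole_center)   # ...linked dimensions update: 20.0 (40.0, 30.0)
```

In a real feature-based modeler the same principle applies, except that the relations are stored with the sketch constraints and dimensions rather than in code.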
2.2.4.1 Parameter Space
The parameter space is the space of possible parameter values that define a particular mathematical
model, often a subset of finite-dimensional Euclidean space. Often the parameters are inputs of
a function, in which case the technical term for the parameter space is domain of a function. The
ranges of values of the parameters may form the axes of a plot, and particular outcomes of the model
may be plotted against these axes to illustrate how different regions of the parameter space produce
different types of behavior in the model12.
2.2.5 History-Based Modeling
The last aspect of solid modeling is that the order in which parts are created is critical. This is known
as history-based modeling. For example, a hole cannot be created before a solid volume of material
in which the hole occurs has been modeled. If the solid volume is deleted, then the hole is deleted
with it. This is known as a parent-child relation. The child (hole) cannot exist without the parent
(solid volume) existing first. Parent-child relations are critical to maintaining design intent in a part.
Most solid modeling software recognizes that if you delete a feature with a hole in it, you do not want
the hole to remain floating around without being attached to the feature. Consequently, careful
thought and planning of the base feature and initial additional features can have a significant effect
on the ease of adding subsequent features and making modifications.
2.2.6 Associative Modeling
The associative character of solid modeling software causes modifications in one object to “ripple
through” all associated objects. For instance, suppose that you change the diameter of a hole on the
engineering drawing that was created based on your original solid model. The diameter of the hole
will be automatically changed in the solid model of the part, too. In addition, the diameter of the hole
will be updated on any assembly that includes that part. Similarly, changing the dimension in the part
model will automatically result in updated values of that dimension in the drawing or assembly
incorporating the part. This aspect of solid model software makes the modification of parts much
12 Wikipedia,
easier and less prone to error. As a result of being feature based, constraint based, parametric, history
based, and associative, modern solid modeling software captures “design intent”, not just the design.
This comes about because the solid modeling software incorporates engineering knowledge into the
solid model with features, constraints, and relationships that preserve the intended geometric
relationships in the model.
➢ The Union A ⋃ B operation returns all the points x ∈ ℝ3 that are either inside A or inside B.
➢ The Intersection A ⋂ B operation returns all the points x ∈ ℝ3 that are both inside A and inside B.
➢ The Difference A \ B operation returns all the points x ∈ ℝ3 that are inside A and outside B.
Figure 2.3 Example of a CSG Tree
Regularized Boolean operators differ from the set-theoretic ones in that dangling lower dimensional
structures are eliminated, all remaining faces, edges and vertices belonging to the closure of the
resulting volume.
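The Boolean operations above can be illustrated with implicit (point-membership) solids. The Python sketch below is a simplified stand-in for a real CSG kernel: each primitive is represented by an inside-test and the tree is built from union, intersection and difference; it deliberately ignores the regularization step discussed above, and the particular primitives and test points are arbitrary.

```python
import numpy as np

# Each solid is a predicate: does point x lie inside?
def sphere(center, radius):
    c = np.asarray(center, dtype=float)
    return lambda x: np.linalg.norm(np.asarray(x, dtype=float) - c) <= radius

def box(lo, hi):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return lambda x: bool(np.all((lo <= np.asarray(x, float)) & (np.asarray(x, float) <= hi)))

# Set-theoretic Boolean operations on point-membership classifications
def union(a, b):        return lambda x: a(x) or b(x)
def intersection(a, b): return lambda x: a(x) and b(x)
def difference(a, b):   return lambda x: a(x) and not b(x)

# A tiny CSG tree: (box ∪ sphere) \ smaller sphere
solid = difference(union(box([0, 0, 0], [1, 1, 1]), sphere([1, 1, 1], 0.5)),
                   sphere([0.5, 0.5, 0.5], 0.25))

print(solid([0.9, 0.9, 0.9]))   # True  - inside the box, outside the cut
print(solid([0.5, 0.5, 0.5]))   # False - removed by the difference
```

A mesh generator built on such classifications still has to resolve the dangling lower-dimensional structures mentioned above, which is what the regularized operators are for.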
13Mark W. Beall1, Joe Walsh2, Mark S. Shephard, “Accessing CAD Geometry For Mesh Generation”.
14Hidemasa Yasuda, Taku Nagata, Atsushi Tajima, Akio Ochi, “KHI Contribution to GMGW-1”, Kawasaki, 1st
Geometry and Mesh Generation Workshop Denver, CO June 3-4, 2017.
independent of the analysis to be performed. A priori element shape quality tests have often been used
as a misleading indicator of a good mesh, independent of the analysis to be performed or the accuracy
desired. The appropriate mesh is one that produces the desired accuracy for the problem to be
solved. In practice this is only achievable through adaptivity. Different types of analyses require
different instances of the geometry to capture the physics. For example, we can perform a dynamic
structural response analysis and a Computational Fluid Dynamics (CFD) analysis on the same part.
The dynamic structural response analysis requires the solid geometry of the part while the CFD
analysis requires the geometry of the cavities through which the fluid will flow. This simple
illustration of the different uses of geometry representations is shown in Figure 2.5: dynamic
structural response analysis requires the solid geometry of the part, while CFD analysis requires the
geometry of the flow cavities. Different types of analysis also require different resolutions of mesh to
achieve the desired accuracy on a particular design.
2.5.2 Dis-Featuring
Disfeaturing is one of the most complex issues associated with CAD geometry access for mesh
generation. Indeed one of the major issues that the CAD and CAE software industries have
encountered is developing a consistent definition of a feature. For the purposes of this paper we will
classify features into two main groups. The first group of features will be called “intended features”.
Intended features are features that were explicitly defined as features in the model that drive the
resulting geometry. In this case a feature-based modeling system was used to create a model which
contains intended features. Intended features can only be created by feature-based modeling
systems and can be suppressed by the original modeling system. The second group of features will
be called “artifact features”. Artifact features are features that are created indirectly by the modeling
process. One example of artifact features is the creation of engineering features such as holes by a
modeling system that is not feature-based. The second example of artifact features is the creation
of recognizable patterns of geometry / topology data that create a valid design model but also create
difficulties associated with mesh generation. Artifact features can be created from any modeling
system and cannot be suppressed in the original modeling system. Figure 2.6 illustrates small
features removed from geometry.
Part of the complexity associated with CAD geometry access for mesh generation is due to the fact
that historically analyses are performed too late in the design process and the design model contains
more details than are appropriate for analysis. Moving the analysis earlier in the design process will
help to reduce, but will not remove, the need for defeaturing. Since multiple analysis types may be
required for any design state there remains a need for defeaturing to various levels to support the range
of analysis to be performed.
15 Butlin, G., Stops C., “CAD Data Repair”, Proc. 5th Int. Meshing Roundtable, pp. 7-12, 1996.
16 Mezentsev, A.A. and Woehler, T., “Methods and algorithms of automated CAD repair for incremental surface
meshing”, Proc. 8th Int. Meshing Roundtable, Sandia report SAND 99-2288, pp. 299-309, 1999.
17 Ribo, R., Bugeda, G. and Onate, E., “Some algorithms to correct a geometry in order to create a finite element
manifold geometry is problematic in mesh generation sense and should be avoided or resolved prior
to that.
Figure 3.1 Domain Topology (O-Type, C-Type, and H-Type; from left to right)
18 Richardson LF. Weather prediction by numerical process. Cambridge: Cambridge University Press; 1921.
19 Edelsbrunner H. “Geometry and topology for mesh generation”, Cambridge: Cambridge university, 2001.
20 Baker, T., “Mesh generation: Art or science?” MAE Department, Princeton University, Princeton, NJ.
21 Julian Marcon, Michael Turner, and Joaquim Peiro, “High-order curvilinear hybrid mesh generation for CFD
simulations”, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom.
When it comes to mesh parameters, the studies show that with carefully chosen mesh spacing
around the leading edge, good orthogonality and skewness factors, smooth spacing variation, and a
reasonable number of nodes, excellent CFD results can be obtained from the mesh in terms of accuracy
of the computed functional, determined convergence order and adjoint error estimation. Now with
regard to the topology of individual cells, three types are considered: Hexahedral, Tetrahedral, and
Polyhedral. In essence, topology is a structure of blocks that acts as a framework for placing mesh
elements. Blocks are laid out without gaps, with shared edges and corners, and contain the same
number of elements along each side. The number of blocks will dictate the skewness of the grid
elements. As an example, Figure 3.2 displays a 3D C-H topology for a wing section, where red
represents the C-type and blue the H-type22.
Figure 3.2 C-H Type Topology for a Wing Section
to a new set of fairly straightforward equations without messy cross-derivative terms. In addition,
the orthogonality and smoothness properties of conformal mappings mean that meshes obtained in
this manner are of high quality in physical space. Perhaps the first published application of
conformal mapping to CFD is the circle-plane mapping that transforms the space exterior to an airfoil
onto the interior of the unit circle. This particular conformal mapping technique extends back a long
way, but its use for creating suitable meshes was a novel application. The same mapping was later
used by [Bauer et al.]24 when they developed the first transonic flow code for solving the full potential
equation. Other conformal mappings were developed to handle axisymmetric inlets and airfoil/slat
combinations. A comprehensive review of techniques for mesh generation has been given by Moretti25. Another
useful reference is the paper by26.
24 Bauer F, Garabedian P, Korn D. Supercritical wing sections I, Lecture Notes in Economics and Mathematical
Systems, vol. 66. Berlin: Springer; 1972.
25 Moretti G.”Grid generation using classical techniques”. Proceedings of the NASA Langley workshop on
Figure 3.5 Dual Block Grid Topology for a Generic Wing-Fuselage Configuration
29David E. Keyes, “Domain Decomposition Methods for Partial Differential Equations”, Department of Applied
Physics & Applied Mathematics Columbia University.
tedious, which of course is true. But the reward is complete control of the grid and its quality,
something which is usually hard to come by in automated unstructured grid generators.
30Zaib Ali, James Tyacke, Paul G. Tucker, Shahrokh Shahparb, “Block topology generation for structured multi-
block meshing with hierarchical geometry handling”, 25th International Meshing Roundtable (IMR25), 2016.
31 T. Tam, C. Armstrong, 2D finite element mesh generation by medial axis subdivision, Adv. Eng. Software (1991).
32 D. Sheehy, C. Armstrong, D. Robinson, Computing the medial surface of a solid from a domain Delaunay
triangulation, ACM Symposium on Solid Modeling Foundations and Applications, 1995.
33 T. D. Blacker, R. J. Myers, Seams and wedges in Plastering: A 3D hexahedral mesh generation algorithm,
generation, Proceedings of 14th International Meshing Roundtable, Sandia National Lab, 2005.
35 D. Rigby, “A technique for automatic multi-block topology generation using the medial axis”, NASA/CR
FEDSM2003-45527 (2004).
36 P.-E. Danielsson, “Euclidean distance mapping, Computer Graphics and image processing”, (1980).
37 J. Vleugels, M. Overmars, Approximating generalized Voronoi diagrams in any dimension (1995). Technical
block meshing with hierarchical geometry handling”, 25th International Meshing Roundtable (IMR25), 2016.
42 W. R. Quadros, LayTracks3D: a new approach to meshing general solids using medial axis transform, Procedia
Engineering 82 (2014).
[Malcevic]43 presents another automated blocking strategy based on a Cartesian fitting method.
While preserving the topology definition, a forward geometry simplification is performed followed
by fitting the model into a Cartesian framework. The next step is blocking the domain after which the
blocked model is mapped back on to the original geometry. Further operations such as removing
singularities by J-grid wrapping are performed to enhance the mesh quality. This technique has been
applied for meshing the end-wall cavities found in turbomachinery. This technique is very simple
but has only been demonstrated for 2D cases so far. The method sometimes produces some
unnecessary mesh clustering across the block interfaces. An assessment of various automatic block
topology generation techniques surveyed above has been performed in44-45. The comparison has
been carried out using an adjoint based error analysis of the meshes generated by these block
topologies. It is found that, in general, the medial axis based approaches provide optimal blocking
and yields better accuracy in computing the functional of interest. Mostly, domains having internal
flows were used for this assessment. However, the medial axis based methods may not always yield
an optimal block topology when dealing with complex 3D geometries and external flows. To
overcome this limitation, a hybrid blocking technique is illustrated which makes use of the distance
field iso-surface in addition to the medial axis transform. This is demonstrated using a wing-body-
tail and a jet-wing-flap configurations. In addition to that, to reduce the meshing effort, a hierarchical
geometry handling approach is also defined and applied to an engine-wing-flap configuration. These
approaches are described next.
Figure 3.11 Jet-Wing-Flap: Medial Axis Transform (Compression Shock) and Expansion Features Close
to the Geometry – Courtesy of [Ali et al]
43 I. Malcevic, Automated blocking for structured CFD gridding with an application to turbomachinery secondary
flows, 20th AIAA Computational Fluid Dynamics Conference, Honolulu, Hawaii, 2011.
44 Z. Ali, P. G. Tucker, Multiblock structured mesh generation for turbomachinery flows, Proceedings of the 22nd
1. The distance field is computed around the domain of interest as shown in the Figure 3.12
(a). An exact equation for the wall distance d is the hyperbolic eikonal equation which
models the front propagation from the surface at unit velocity.
|∇d| = 1 + Γ ∇²d
Eq. 3.1
where Γ −→ 0 yielding viscosity solutions. The first arrival time of the front for a unit velocity
is equal to the wall distance. The eikonal equation is solved using a fast marching method46.
A suitable iso-surface is then extracted from d. This iso-surface selection is currently arbitrary,
but it can be linked to a criterion. For example, the width of the shear-layer regions in the jet
wake can dictate this selection upstream, or it could be based upon the dimensionless wall
distance y+ value. The iso-surface acts like a virtual geometry or a wrap around the real
domain (see Figure 3.12 (b)); a simplified sketch of this distance-field step is given after this list.
Figure 3.12 Two dimensional jet-wing-flap geometry: (a) the distance field; (b) distance field wrap and
(c) corresponding medial axis (d) hybrid blocking around 2D geometry – Courtesy of [Ali et al]
46H. Xia, P. Tucker, Finite volume distance field and its application to medial axis transforms, International
Journal for Numerical Methods in Engineering 82(1) (2010).
2. The next step is approximation of the medial axis between the geometry and the distance
field wrap. The Voronoi diagram based algorithm of [Dey and Zhao]47 is used here for the
medial axis approximation. This algorithm provides a more stable and continuous medial axis
for complex 3D domains than the voxel thinning approach. The input to this program is the
point cloud data of the geometry and the distance field iso surface. It makes use of the
observation that certain Voronoi facets are positioned close to the medial axis if their dual
Delaunay edges tilt away from the surface or are very long. Hence, the angle condition and
the ratio condition are defined to filter such tilted and long Delaunay edges and the medial
axis is approximated. Let VP be the Voronoi diagram for a dense point set P sampled from a smooth
compact surface S ⊂ ℝ3. This Voronoi diagram is a cell complex comprising the Voronoi cells Vp,
p ∈ P, and their facets, edges and vertices, where for each p ∈ P,
Vp = { x ∈ ℝ3 : ‖p − x‖ ≤ ‖q − x‖ , ∀q ≠ p }
Eq. 3.2
where p and q are any two points in P. Let DP be the Delaunay triangulation of P and dual to
the Voronoi complex. The Delaunay triangles incident to point p which are dual to the Voronoi
edges intersected by a tangent plane at p are used to construct the criteria. All the Delaunay
edges that make relatively large angle with the planes of the triangles are filtered. If the angle
between the vector tpq from p to q and the normal nptu to a triangle ptu is less than a threshold
angle π/2 − θ for all the triangles, then that associated Delaunay edge is filtered i.e.
max ∠( nptu , tpq ) < π/2 − θ
Eq. 3.3
gives good results. The ratio condition is based on the comparison of the length of the
Delaunay edges with the circum-radii of the triangles. Thus those Delaunay edges are filtered
which satisfy the criteria:
min ‖p − q‖ / r > ρ
Eq. 3.4
where ‖p − q‖ is the length of a Delaunay edge and r is the circum-radius of a triangle
ptu. A value of ρ = 8 is normally used for dense point clouds. The medial axis is generated as
a continuous surface which can be imported into the mesh generator. The medial axis for the
JWF slice is shown in the Figure 3.12 (c).
3. To complete the blocking process, additional rules as described in the Section 1 are manually
used. Applying the rules, for example to the 2D JWF slice, the expansion features are
connected to the nearest medial vertex or otherwise the medial axis as shown in the Figure
3.12 (d).
4. Once the critical parts of the domain have been blocked using the medial axis, the far-field
region can be partitioned using simple Cartesian fitting or H-type blocks. This is shown, for
example, in Figure 3.12 (d) with the green lines. This resulting domain decomposition is
significantly better than the one obtained initially shown in the Figure 3.11. There can still
be some regions where the block topology is unsatisfactory. Such areas must be manually
altered. Hence, a semi-automatic blocking process arises. The mesh is then generated in the
commercial program Pointwise.
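As a simplified illustration of step 1 above, the Python sketch below computes a wall-distance field on a Cartesian background grid and tags an iso-contour of it to act as the "wrap". For brevity it uses a Euclidean distance transform from SciPy rather than the fast-marching eikonal solver used in the work described above, and the circular placeholder geometry and the iso-value are arbitrary choices.

```python
import numpy as np
from scipy import ndimage

# Cartesian background grid over the domain of interest
nx, ny = 201, 101
x, y = np.meshgrid(np.linspace(0.0, 2.0, nx), np.linspace(0.0, 1.0, ny))

# Placeholder "geometry": a circular body marked as solid cells
solid = (x - 0.7) ** 2 + (y - 0.5) ** 2 <= 0.2 ** 2

# Distance d from every fluid cell to the nearest solid cell.
# (A Euclidean distance transform stands in for the eikonal / fast-marching solve.)
spacing = (y[1, 0] - y[0, 0], x[0, 1] - x[0, 0])
d = ndimage.distance_transform_edt(~solid, sampling=spacing)

# Tag an iso-contour of the distance field to use as the "wrap"; the iso-value
# could instead be tied to the shear-layer width or to a y+ criterion.
iso_value = 0.15
wrap = np.isclose(d, iso_value, atol=0.5 * max(spacing))
print(f"cells tagged on the wrap contour: {int(wrap.sum())}")
```

The tagged contour plays the role of the distance-field wrap in Figure 3.12 (b); steps 2 to 4 then operate between this wrap and the real geometry.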
47 T. K. Dey, W. Zhao, Approximate medial axis as a Voronoi subcomplex, Computer-Aided Design 36 (2004).
Fp,x = (kp/s) Vx Vrel ,  Fp,θ = (kp/s) Vθ Vrel ,  Fp,r = (kp/s) Vr Vrel
Fn,x = (kn/s) (Vθ/Vrel) f(Vx Vθ , α) ,  Fn,θ = (kn/s) (Vx/Vrel) f(Vx Vθ , α)
Eq. 3.5
48 T. Cao, P. Hield, P. G. Tucker, Hierarchical immersed boundary method with smeared geometry, 54th AIAA
Aerospace Sciences Meeting, AIAA Science and Technology Forum and Exposition, 2016, pp. 2016–2130.
49 T. Cao, N. R. Vadlamani, P. G. Tucker, A. R. Smith, M. Slaby, C. T. J. Sheaf, Fan-intake interaction under high
incidence, Proc. of ASME Turbo Expo, Seoul, South Korea, 2016. ASME Paper Number GT2016–56561.
Here, Vx, Vθ and Vr are the axial, tangential and radial velocity components. Vrel is the magnitude of the
fluid velocity relative to the blade. Kp and Kn are calibration constants. Also, α and s are the local blade
metal angle and blade pitch respectively. The above equations can also be modified to produce local
blockage terms.
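As a small numerical illustration of the source terms of Eq. 3.5 as reconstructed above, the sketch below evaluates the parallel and normal force components for a single cell. The calibration constants, pitch, metal angle and, in particular, the form chosen for the function f are placeholder assumptions for demonstration, not values or definitions from the cited work.

```python
import math

def body_force_terms(Vx, Vth, Vr, kp, kn, s, alpha):
    """Evaluate the parallel (Fp) and normal (Fn) body-force components of Eq. 3.5.

    Vx, Vth, Vr are assumed to already be velocity components relative to the
    blade. The deviation function f is NOT specified in the text above; a simple
    flow-angle-minus-metal-angle placeholder is used purely for illustration.
    """
    Vrel = math.sqrt(Vx**2 + Vth**2 + Vr**2)

    # Parallel (loss-like) components
    Fp_x = kp / s * Vx * Vrel
    Fp_th = kp / s * Vth * Vrel
    Fp_r = kp / s * Vr * Vrel

    # Placeholder deviation function: local flow angle minus blade metal angle
    f = math.atan2(Vth, Vx) - alpha

    # Normal (turning) components
    Fn_x = kn / s * (Vth / Vrel) * f
    Fn_th = kn / s * (Vx / Vrel) * f
    return (Fp_x, Fp_th, Fp_r), (Fn_x, Fn_th)

Fp, Fn = body_force_terms(Vx=150.0, Vth=60.0, Vr=5.0,
                          kp=0.02, kn=4.0, s=0.08, alpha=math.radians(20.0))
print("Fp =", Fp, "\nFn =", Fn)
```

In a flow solver these terms would be added as momentum sources in the cells occupied by the smeared blade row, with kp and kn calibrated against reference data.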
3.4.4 Results
3.4.4.1 NASA CRM Wing-Body-Tail
In this section, the hybrid blocking is applied to partition the domain around a 3D NASA Common
Research Model (CRM) horizontal wing-body-tail configuration. This model represents a modern,
transonic, commercial aircraft designed to cruise at M∞ = 0.85 and CL = 0.5. The geometric and
aerodynamic details of the model, together with further information, can be obtained from the
development by [Ali et al.]50. The medial axis around the wing and tail also provides a block topology
similar to O-type or C-type meshes. To assist the blocking, expansion features at the trailing edges of
the wing and the tail are joined to the nearest medial axis . After the blocking around the geometry is
complete, the far-field domain partitioning is carried out. The region is partitioned to create a H-type
mesh. The block topology around the model is shown in the Figure 3.14 (c). The volume and the
surface mesh cuts are displayed in the Figure 3.14 (d). The NASA CRM configuration has been the
test case for the 4th and 5th AIAA CFD drag prediction workshops51. The aim of the workshop is to
assess the state-of-the-art in the CFD methods for aircraft aerodynamic analysis. Here, we use the
same flow conditions as given in the workshop to compute the flow around the test case. The
MAT-based topology created 256 blocks in 2 minutes, with gridding completed in 6 minutes. The
simulations are performed in HYDRA which is an unstructured, finite volume, edge-based and
compressible flow solver using MUSCL based flux differencing.
The simulation is carried out at M∞ = 0.85 and CL = 0.5 with Re = 5 × 106 based on the reference chord
length Cref = 7.00532 m. Table 3.1 describes the free-stream flow conditions. A coarse mesh of
Figure 3.14 NASA CRM Wing-Body-Tail (c) Hybrid Blocking (d) Mesh Cut Section – Courtesy [Ali et al.]
50 Zaib Ali, James Tyacke, Paul G. Tucker, Shahrokh Shahparb, “Block topology generation for structured multi-
block meshing with hierarchical geometry handling”, 25th International Meshing Roundtable (IMR25), 2016.
51 J. Vassberg, E. N. Tinoco, M. Mani, B. Rider, T. Zickuhr, D. Levy, O. P. Brodersen, B. Eisfeld, S. Crippa, R. A.
Wahls, et al., Summary of the Fourth AIAA CFD Drag Prediction Workshop (2010). AIAA Paper No. AIAA-2010.
approximately 4 million cells is used. The first grid node from the wall is located at y+ ≈ 1. The
Spalart-Allmaras (SA) turbulence model is used for this simulation. The flow angle for this mesh to
attain CL = 0.5 is α = 2.36 deg. The free-stream conditions are given in Table 3.1.

Table 3.1 NASA CRM Free-Stream Conditions
M∞      0.85
Ptotal  201326.9 Pa
Ttotal  310.93 K

3.4.4.2 Jet-Wing-Flap
In this section, the 3D jet-wing-flap case is presented. The geometry comprises a co-axial nozzle,
pylon and a wing with a flap, as shown in Figure 3.15 (a). This realistic aero-engine geometry has been
used for detailed computational aero-acoustics analysis. The pylon adds complexity to the otherwise
cylindrical nozzle topology along with the wing and the flap. Hence, blocking such a case demands
significant user insight. After wrapping the distance field, the medial axis is approximated. This is
shown in the Figure 3.15 (b). To simplify the blocking procedure, small medial axis branches are
removed for this case. This is followed by the inner blocking aided by the rules which is shown in the
Figure 3.15 (c). The far-field domain decomposition can then be carried out at this stage.
However, one of the aims for the aero-acoustic jet simulations is to properly resolve the shear layers
in the far-field. This requires a good quality mesh aligned with the shear layer regions. The current
block topology as shown in the Figure 3.15 (c) is non-optimal for properly resolving the shear
layers produced by the bypass and the core flows. Hence a manual alteration of the blocking around
the pylon and the core flow exhaust is performed.
Figure 3.15 Jet-Wing-Flap (a) CAD, (b) CAD and the Medial Axis Cut Section, (c) Inner Hybrid Blocking –
Courtesy of [Ali et al]
The modified inner block topology with the far-field decomposition are shown in the Figure 3.16.
A RANS simulation using the SST k − ω model is carried out on the mesh generated by the hybrid blocking.
The first grid node from the wall is located at y+ ≈ 1. The two cases presented above show how the
hybrid blocking approach can be effectively used to decompose and mesh the complex geometries.
The medial axis based domain decompositions also provide meshes having better flow alignment
when compared to other partitioning methods e.g. Cartesian fitting and cross field based techniques.
Hence, this technique further enhances the scope and applicability of these MAT based blocking
methods. Also, the blocking templates could be generated using this approach which can speed up
the mesh generation process and aid an inexperienced CFD user.
3.4.4.3 Engine-Wing-Flap
In the last section, a coaxial nozzle with pylon and wing geometry was presented, which makes up the
rear or downstream part of the aero-engine. To carry out a more realistic simulation, the front engine
part containing the axisymmetric intake, hub and splitter geometry is added to this rear part using an
overset mesh at the interface. This procedure avoids re-blocking the domains to obtain a cell-to-cell
match between the two zones. Also, a smeared fan geometry is used where the fan is modeled using
the BFM (see the schematic in Figure 3.17 (Top)). The other downstream components are imprinted
and treated again with the BFM but with localized sources. These internal geometry components
include the downstream vanes, gearbox shaft, and the engine supporting A-frames. Thus, using a
hierarchical geometry handling approach, a complex domain can be readily meshed and analyzed in a
design optimization cycle.
Figure 3.16 Jet-Wing-Flap with Modified Inner Blocking (To Accommodate Shear Layers) – Courtesy of [Ali et al]
A cut section of the mesh at z=0 plane is shown in the Figure 3.17 (Bottom). The total mesh size is
50 million. The internal geometry in the front part is modeled using a body force model which has
been extended to include the local blockage and wakes modeling. This is done by adding local
enhanced source terms to generate wake zones, which is similar to adding the source terms in the
IBM for simulating geometry or boundary on a non-conformal Cartesian mesh.
Figure 3.17 Engine-Wing-Flap (Top) Schematics Showing Geometric Zones and Domains with Different
Block Topologies (Bottom) Cut Section at z = 0 Plane – Courtesy of [Ali et al]
r(ξ, η) = ∑n=1..N φn(ξ/I) r(ξn , η) + ∑m=1..M ψm(η/J) r(ξ , ηm) − ∑n=1..N ∑m=1..M φn(ξ/I) ψm(η/J) r(ξn , ηm)
Eq. 4.1
Where now the "blending" functions, φn and ψm, are any functions which satisfy the cardinality
conditions:
φn(ξL/I) = δnL , n, L = 1, 2, ..., N    and    ψm(ηL/J) = δmL , m, L = 1, 2, ..., M
Eq. 4.2
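A minimal Python sketch of the bilinear (N = M = 2) form of Eq. 4.1 is given below: with linear two-point blending functions satisfying the cardinality conditions of Eq. 4.2, the interior points of a single block are generated from its four boundary curves. The boundary curves used here are arbitrary analytic placeholders, not taken from any case in this document.

```python
import numpy as np

def tfi_bilinear(bottom, top, left, right, ni, nj):
    """Bilinear transfinite interpolation of a block interior from four
    boundary curves, each a function of a normalized parameter in [0, 1]."""
    xi = np.linspace(0.0, 1.0, ni)
    eta = np.linspace(0.0, 1.0, nj)
    grid = np.zeros((ni, nj, 2))
    # Corner points, needed for the tensor-product correction term
    c00, c10 = np.asarray(bottom(0.0)), np.asarray(bottom(1.0))
    c01, c11 = np.asarray(top(0.0)), np.asarray(top(1.0))
    for i, s in enumerate(xi):
        for j, t in enumerate(eta):
            u = (1 - t) * np.asarray(bottom(s)) + t * np.asarray(top(s))   # blend in eta
            v = (1 - s) * np.asarray(left(t)) + s * np.asarray(right(t))   # blend in xi
            uv = ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10 +
                  (1 - s) * t * c01 + s * t * c11)                         # doubly counted corners
            grid[i, j] = u + v - uv
    return grid

# Placeholder boundaries: a unit square with a bumped bottom wall
bottom = lambda s: (s, 0.1 * np.sin(np.pi * s))
top    = lambda s: (s, 1.0)
left   = lambda t: (0.0, t)
right  = lambda t: (1.0, t)
grid = tfi_bilinear(bottom, top, left, right, ni=21, nj=11)
print(grid.shape, grid[10, 0])   # a point on the bumped bottom boundary
```

The final subtracted term is the tensor-product contribution that removes the doubly counted corner points, which is the role of the last sum in Eq. 4.1.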
4.2.1 Blending Function
The interpolation function defined by Eq. 4.1 can be thought of as the sum of two unidirectional
interpolations, with the final term removing the contribution of the corner points, which would
otherwise be counted twice. With N = M = 2, using the Lagrange interpolation polynomials as the
blending functions, this is termed the transfinite bilinear interpolant. With N = M = 3, this form is the
transfinite bi-cubic interpolant. Other candidates for the blending functions are the exponential,
Hermite interpolation polynomials and splines. For example, for n, L = 2, Eq. 4.3
shows a typical exponential blending function as:
r(ξ) = Yi + [Yi − Yi−1] { Exp[ Bi−1 (ξ − Xi)/(Xi − Xi−1) ] − 1 } / { Exp(Bi−1) − 1 }    for Xi−1 ≤ ξ ≤ Xi
The integer m represents the number of control points with coordinates {Xi, Yi}. The quantity Bi−1, called
the stretching parameter, is responsible for the grid density. Specifying B1, values of Bi ≥ 2 are obtained
by matching the slopes at the control points; this guarantees a smooth grid transition between each
region and can be accomplished using Newton's iterative scheme, which is quadratically convergent. The
greater the Bi, the less discontinuity will propagate. Similarly, a blending function can be
constructed for the η direction. The spline-blended form gives the smoothest grid, with continuous
second derivatives53. An example of exponential stretching (Bi = −2) is given in Figure 4.1.
The exponential function, while reasonable, is not the best choice when the variation in grid spacing
is large. The truncation error associated with the metric coefficients is proportional to the rate of
change of grid spacing. A large variation in grid spacing, such as the one resulting from exponential
function, would increase the truncation error, hence, attributing to the solution inaccuracies. A
suggested alternative to exponential function has been the usage of hyperbolic sine function given as
r(ξ) = Yi + [Yi − Yi−1] sinh[ Bi−1 (ξ − Xi)/(Xi − Xi−1) ] / sinh(Bi−1)    for Xi−1 ≤ ξ ≤ Xi
where 0 ≤ Xi , Yi ≤ 1 , 0 ≤ ξ , r(ξ) ≤ 1 , i = 2, ..., m
Eq. 4.4
The hyperbolic sine function gives a more uniform distribution in the immediate vicinity of the
boundary, resulting in less truncation error. This criterion makes the hyperbolic sine function an
excellent candidate for boundary-layer type flows. A more appropriate function for flows with both
viscous and non-viscous effects would be the hyperbolic tangent function:
53 Joe F. Thompson, Z. U. A. Warsi, C. Wayne Mastin, “Numerical Grid Generation - Foundations and Applications”,
North Holland, 1985.
r(ξ) = Yi + [Yi − Yi−1] tanh[ (Bi−1/2) ( (ξ − Xi−1)/(Xi − Xi−1) − 1 ) ] / tanh( Bi−1/2 )    for Xi−1 ≤ ξ ≤ Xi
where 0 ≤ Xi , Yi ≤ 1 , 0 ≤ ξ , r(ξ) ≤ 1 , i = 2, ..., m
Eq. 4.5
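The three stretching functions above can be compared directly by writing them in a normalized form s(t) on t ∈ [0, 1] between two control points. The Python sketch below is a generic reformulation of this kind, not the exact parameterization of Eqs. 4.3 to 4.5, and the stretching parameter B and point count are arbitrary demonstration values.

```python
import numpy as np

def stretch_exp(t, B):
    """Exponential one-sided stretching, s(0) = 0, s(1) = 1."""
    return (np.exp(B * t) - 1.0) / (np.exp(B) - 1.0)

def stretch_sinh(t, B):
    """Hyperbolic-sine stretching: more uniform spacing right at the wall."""
    return np.sinh(B * t) / np.sinh(B)

def stretch_tanh(t, B):
    """Hyperbolic-tangent stretching: controlled spacing at the wall and the outer edge."""
    return 1.0 + np.tanh(0.5 * B * (t - 1.0)) / np.tanh(0.5 * B)

# Distribute 11 points between two control points (Y0, Y1) with stretching B
t = np.linspace(0.0, 1.0, 11)
Y0, Y1, B = 0.0, 1.0, 4.0
for name, s in [("exp", stretch_exp), ("sinh", stretch_sinh), ("tanh", stretch_tanh)]:
    r = Y0 + (Y1 - Y0) * s(t, B)
    print(f"{name:5s} first spacing = {r[1] - r[0]:.4f}, last spacing = {r[-1] - r[-2]:.4f}")
```

Printing the first and last spacings makes the trade-off discussed in this section visible: the exponential form produces the largest variation in spacing, while the hyperbolic forms behave more gently near the ends of the interval.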
The hyperbolic tangent, as revealed in Figure 4.2, gives a more uniform distribution on the inside as
well as on the outside of the boundary layer, to capture the non-viscous effects of the solution. Such
overall improvement makes the hyperbolic tangent a prime candidate for grid point
distribution in viscous flow analysis, as shown in Figure 4.3 for a generic wing-fuselage. A
numerical approximation can be used to compute the grid-point distribution on a boundary curve.
This approach is widely used for complex configurations, and care must be taken to ensure
monotonicity of the distribution. For example, the natural cubic spline is C2 continuous and offers
great flexibility in grid spacing control. However, some oscillations can be inadvertently introduced
into the control function. The problem can be avoided by using a smoothing cubic spline technique
and specifying the amount of smoothing as well as the control points. Another choice would be a
lower-order polynomial such as the Monotonic Rational Quadratic Spline (MRQS), which is always
monotonic and smooth. Another advantage of MRQS over the cubic spline is that it is an explicit
scheme and does not require any matrix inversion. A sample coding in FORTRAN is given in
Appendix A, and the resultant grid and topology for a dual-block generic airplane geometry is
displayed in Figure 4.3. A pioneering work in the control point form of Algebraic Grid Generation
using univariate interpolations can be attributed to [Eiseman and Smith]54.
Figure 4.3 Grid for Dual-Block Generic Wing-Fuselage Geometry
4.2.1.1 Case Study- Rapid Meshing System For Turbomachinery
In PADRAM (Parametric Design and Rapid Meshing System), grid generation starts by dividing the
computational blocks into sub-blocks for the purpose of generating the algebraic grid and the
control functions. An O-type grid is used for the blades, H-type grids near the periodic boundary and
in the upstream and downstream blocks, and C-type grids for semi-infinite boundaries such as the
splitter, pylon and RDF (Radial Drive Fairing). Transfinite interpolation (TFI) is used to generate the
initial grid based on a linear interpolation of the specified boundaries. TFI is very easy to program,
computationally efficient, and the grid spacing is under direct control. PADRAM uses the following
double clustering function [Shahpar and Lapworth]55:
y = { (2α + β) [ (β + 1)/(β − 1) ]^((η − α)/(1 − α)) + 2α − β } / { (2α + 1) ( 1 + [ (β + 1)/(β − 1) ]^((η − α)/(1 − α)) ) }    where 1 < β < ∞ and 0 ≤ α ≤ 1
Eq. 4.6
where η is the non-dimensional grid point distribution in the computational plane, which varies
between zero and 1; β is the clustering parameter, with more clustering achieved by letting β approach
1; and α is a non-dimensional quantity that indicates the grid location toward which the clustering
should be attracted, e.g. a value of 0.5 ensures clustering is done uniformly at both ends.
54Peter Eiseman and Robert E. Smith, “Applications of Algebraic Grid Generation”, April 1990.
55Shahrokh Shahpar and Leigh Lapworth, “PADRAM: Parametric Design And Rapid Meshing System For
Turbomachinery Optimization”, Proceedings of ASME Turbo Expo 2003, USA.
Figure 4.4 (a) shows the PADRAM mesh topology for a single OGV mesh. Note that the upstream and
downstream H-mesh can also be rotated to align the mesh with the blade inlet and exit metal angles.
Tip Gaps
Tip clearance is often required for shroudless rotor blades such as fans, compressors and some
turbines, as well as hub clearance for cantilevered stator blades. Tip leakage flows are often poorly
resolved. If the number of points in the tip clearance is too low, then the complex physical phenomena
that occur there cannot be accurately modelled. It has been found that the flow solution can be
sensitive to how the gap is modelled, and in some solvers the geometry is changed to alleviate the
problem, e.g. sheared H-meshes often use the so-called “pinched” tip gap. However, PADRAM models
the gap in a way that is consistent with how it meshes the blade geometry. Once the O-grid is
generated with the outer domain of the blade, the grid corresponding to the solid part of the domain
is constructed using the same boundary node distribution. It is important to keep the mesh spacing
in the inner and outer parts of the wall as close as possible to each other. Figure 4.4 (b) shows a typical
tip gap mesh for a compressor rotor generated in PADRAM (the tip gap has been increased to 5% span
for demonstration). PADRAM can construct a viscous mesh for the complete bypass assembly,
consisting of 52 differently staggered OGVs of three types (over- and under-cambered), a pylon, RDF
and a splitter, in a matter of minutes. The details of the mesh near the OGV, pylon, RDF and the
splitter are shown in Figure 4.5. For complete details, please see the work by [Shahpar and Lapworth]56.
Figure 4.5 Detail of the Splitter C-Mesh, Hub Mesh and the Engine-Core Exit Mesh - Courtesy of
[Shahpar and Lapworth]
56Shahrokh Shahpar and Leigh Lapworth, “PADRAM: Parametric Design And Rapid Meshing System For
Turbomachinery Optimization”, Proceedings of ASME Turbo Expo 2003, USA.
be elliptic, while if the specification is on only a portion of the boundary the equations would be
parabolic or hyperbolic. This latter case would occur, for instance, when an inner boundary of a
physical region is specified, but a surrounding outer boundary is arbitrary. The present chapter,
however, treats the general case of a completely specified boundary, which requires an elliptic partial
differential system.
4.3.1 Elliptic Schemes
At this stage the grid is usually smooth enough to satisfy the majority of applications but, if needed,
further smoothing is obtained from the solution of elliptic partial differential equations (PDE). For the
2D formulation, forcing function terms are used to construct stretched layers of cells close to the
domain boundaries, where (ξ, η) denote the coordinates in the computational domain. Control
functions are computed using the boundary point spacing, r, and then interpolated to the inner
points57. The forcing terms (P, Q) are computed as:
P = − (rξ · rξξ) / |rξ|² , Q = − (rη · rηη) / |rη|²
Eq. 4.8
Once P and Q are obtained at each boundary, the values for the inner points are obtained using a linear
interpolation along lines of constant ξ and η:
P(ξ, η) = (1 − η) P1(ξ) + η P2(ξ) , 0 ≤ ξ ≤ 1
Q(ξ, η) = (1 − ξ) Q1(η) + ξ Q2(η) , 0 ≤ η ≤ 1
Eq. 4.9
Grid control of orthogonality at the boundaries is introduced by adding a second term in P and Q:
P = − (rξ · rξξ) / |rξ|² − λ (rξ · rηη) / |rη|² , Q = − (rη · rηη) / |rη|² − λ (rη · rξξ) / |rξ|²
Eq. 4.10
Where 0 < λ < 1 is a factor that relaxes the orthogonality at the boundaries. It has been observed that
the range λ∈ [0.4-0.7] produces optimal results for our configurations. The elliptic PDEs are solved
using a multi-grid method and the smoother is based on a point-wise Newton solver. When the
57Thomas P., and Middlecoff J., ”Direct control of the Grid Point Distribution in Meshes Generated by Elliptic
Equations”, AIAA Journal Vol. 18, No. 6., 1980.
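To make the elliptic smoothing step concrete, the sketch below applies a few point-Jacobi sweeps of the homogeneous (P = Q = 0) form of the quasi-linear elliptic system to an algebraic starting grid. It is a bare-bones Winslow-type smoother written for illustration, not the multigrid/point-wise Newton solver mentioned above; the control functions of Eq. 4.8 to Eq. 4.10 would enter as additional forcing terms, and the skewed starting block is a made-up example.

```python
import numpy as np

def winslow_smooth(grid, sweeps=50):
    """Point-Jacobi sweeps of the homogeneous elliptic (Winslow) system
    g22*x_xixi - 2*g12*x_xieta + g11*x_etaeta = 0 on the interior points.
    grid: array of shape (ni, nj, 2); boundary points are held fixed."""
    x = grid.copy()
    for _ in range(sweeps):
        # Central differences (unit spacing in the computational plane)
        x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
        x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
        g11 = np.sum(x_xi * x_xi, axis=-1, keepdims=True)
        g22 = np.sum(x_eta * x_eta, axis=-1, keepdims=True)
        g12 = np.sum(x_xi * x_eta, axis=-1, keepdims=True)
        x_xieta = 0.25 * (x[2:, 2:] - x[2:, :-2] - x[:-2, 2:] + x[:-2, :-2])
        rhs = (g22 * (x[2:, 1:-1] + x[:-2, 1:-1]) +
               g11 * (x[1:-1, 2:] + x[1:-1, :-2]) -
               2.0 * g12 * x_xieta)
        x[1:-1, 1:-1] = rhs / (2.0 * (g11 + g22) + 1.0e-14)
    return x

# Algebraic starting grid: a skewed quadrilateral block, 21 x 11 points
xi, eta = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 11), indexing="ij")
grid0 = np.stack([xi + 0.3 * eta**2, eta + 0.1 * np.sin(np.pi * xi)], axis=-1)
smoothed = winslow_smooth(grid0)
print(smoothed.shape)
```

Because the boundary points are held fixed here, boundary-normal spacing and orthogonality are not controlled; that is precisely what the control functions and the orthogonality treatments discussed in the remainder of this section provide.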
58 Sorenson R. L. and Steger J. L. Numerical Generation of Two dimensional Grids by the Use of Poisson Equations
with Grid Control, in Numerical Grid Generation Techniques, R. E Smith, ed.. NASA CP 2166, NASA Langley
Research Center, Hampton, VA, USA, 1980.
59 Chaitanya Varier 2017.
60 M. Farrashkhalvat and J.P. Miles, “Basic Structured Grid Generation”, An imprint of Elsevier Science Linacre
House, Jordan Hill, Oxford OX2 8DP, 200 Wheeler Rd, Burlington MA 01803, First published 2003.
61 Ahmed Khamayseh, Andrew Kuprat, C. Wayne Mastin, “Boundary Orthogonality in Elliptic Grid Generation”,
CRC Press LLC, 1999.
62 Thompson, J.F., A general three-dimensional elliptic grid generation system on a composite block structure,
convergence.
This method, which is taken up here, is appropriate for nonphysical (internal) grid boundaries, since
grid spacing present in the initial boundary distribution is usually not maintained. Previous methods
for implementing Neumann orthogonality have relied on a Newton iteration method to locate the
orthogonal projection of an adjacent interior grid point onto the boundary. The Neumann
orthogonality method presented here uses a Taylor series to move boundary points to achieve
approximate orthogonality. Thus, there is no need for inner iterations to compute boundary grid
point positions.
In Dirichlet orthogonality, also taken up in this chapter, control functions (called orthogonal
control functions) are used to enforce orthogonality near the boundary while the initial boundary
grid point distribution is not disturbed. Early papers using this approach were written by
[Sorenson]63 and [Thomas & Middlecoff ]64. In Sorenson’s approach, the control functions are
assumed to be of a particular exponential form. Orthogonality and a specified spacing of the first grid
line off the boundary are achieved by updating the control functions during iterations of the elliptic
system. [Thompson]65 presents a similar technique for updating the orthogonal control functions.
This technique evaluates the control functions on the boundary and interpolates for interior values.
A user-specified grid spacing normal to the boundary is required.
The technique of [Spekreijse]66 automatically constructs control functions solely from the specified
boundary data without explicit user-specification of grid spacing normal to the boundary. Through
construction of an intermediate parametric domain by arclength interpolation of the specified
boundary point distribution, the technique ensures accurate transmission of the boundary point
distribution throughout the final orthogonal grid. Applications to planar and surface grids are given
in67.
Here, we present a technique similar to for updating of orthogonal control functions during elliptic
iteration68. However, our technique does not require explicit specification of grid spacing normal to
the boundary but, employs an interpolation of boundary values to supply the necessary information.
However, unlike69, this interpolation is not constructed in an auxiliary parametric domain, but is
simply the initial algebraic grid constructed using transfinite interpolation.
Although this grid is very likely skewed at the boundary, the first interior coordinate surface is
assumed to be correctly positioned in relation to the boundary, which is enough to give us the
required normal spacing information for iterative calculation of the control functions. Ghost points,
exterior to the boundary, are constructed from the interior coordinate surface, leading to potentially
smoother grids, since central differencing can now be employed at the boundary in the direction
normal to the boundary.
Since our technique does not employ the auxiliary parametric domain of70, theory and
implementation are simpler. The implementation of this technique for the case of volume grids is
63 Sorenson, R.L., A computer program to generate two-dimensional grids about airfoils and other shapes by the
use of Poisson’s equations, NASA TM 81198. NASA Ames Research Center, 1980.
64 Thomas, P.D. and Middlecoff, J.F., Direct control of the grid point distribution in meshes generated by elliptic
Phys. 1995,
67 See Previous.
68 Thompson, J.F., A general three-dimensional elliptic grid generation system on a composite block structure,
Phys. 1995.
70 See Previous.
straightforward, and indeed we present an example. We mention here that [Soni]71 presents another
method of constructing an orthogonal grid by deriving spacing information from the initial algebraic
grid. However, unlike our method which uses ghost points at the boundary, this method does not
emphasize capture of grid spacing information at the boundary.
Instead, the algebraic grid influences the grid spacing of the elliptic grid in a uniform way throughout
the domain. With no special treatment of spacing at the boundary, considerable changes in normal
grid spacing can occur during the course of elliptic iteration. This may be unacceptable in applications
where the most numerically challenging physics occurs at the boundaries.
4.3.1.3.1 Boundary Orthogonality for Planar Grids
We assume an initial mapping x ( ξ, η) = (x (ξ , η) , y(ξ , η)) from computational space [0 , m] x [0 , n]
to the bounded physical domain Ω ⊂ R2. Here m, n are positive integers and the grid lines are the lines
ξ = i , η = j with 0 ≤ i ≤ m , 0 ≤ j ≤ n being integers. The initial mapping x(ξ, η) is usually obtained
using algebraic grid generation methods such as linear transfinite interpolation. Given the initial
mapping, a general method for constructing curvilinear structured grids is based on partial
differential equations [Thompson et al. ]72. The coordinate functions x(ξ , η) and y(ξ , η) are iteratively
relaxed until they become solutions of the following quasi-linear elliptic system:
g22 (xξξ + P xξ) − 2 g12 xξη + g11 (xηη + Q xη) = 0
Eq. 4.11
The control functions P and Q control the distribution of grid points. Using P = Q = 0 tends to generate
a grid with uniform spacing. Often there is a need to concentrate points in a certain area of the grid,
such as along particular boundary segments; in this case, it is necessary to derive appropriate values
for the control functions. To complete the mathematical specification of system Eq. 4.11, boundary
conditions at the four boundaries must be given. (These are the ξ = 0, ξ = m , η = 0, and η = n or “left,”
“right,” “bottom,” and “top” boundaries.) We assume the orthogonality condition
𝐗 𝝃 . 𝐗 𝜼 = 0 , ξ = 0, m & η = 0 , n
Eq. 4.12
We assume that the initial algebraic grid neither satisfies Eq. 4.11 nor Eq. 4.12. Nevertheless, the
initial grid may possess grid point density information that should be present in the final grid. If the
algebraic grid possesses important grid density information, such as concentration of grid points in
the vicinity of certain boundaries, then it is necessary to invoke “Dirichlet orthogonality” wherein
we use the freedom of specifying the control functions P, Q in such a fashion as to allow satisfaction
of Eq. 4.11, Eq. 4.12 without changing the initial boundary point distribution at all, and without
greatly changing the interior grid point distribution. If, however, the algebraic grid does not possess
relevant grid density information (such as may be the case when the grid is an “interior block” that
does not border any physical boundary), we attempt to solve Eq. 4.11, Eq. 4.12 using the simplest
assumption P = Q = 0. Since we are not using the degrees of freedom afforded by specifying the
control functions, we are forced to allow the boundary points to “slide” to allow satisfaction of Eq.
71 Soni, B.K., Elliptic grid generation system: control functions revisited-I, Appl. Math. Com. 1993.
72Thompson, J.F., Warsi, Z.U.A., and Mastin, C.W., Numerical Grid Generation: Foundations and Applications.
North-Holland, New York, 1985.
4.11, Eq. 4.12. This is “Neumann orthogonality.” The composite case of having some boundaries
treated using Dirichlet orthogonality, some treated using Neumann orthogonality, and some
boundaries left untreated will be clear after our treatment of the pure Neumann and Dirichlet cases.
4.3.1.3.2 Neumann Orthogonality
As is typical, let us assume that the boundary segments are given to be parametric curves (e.g., B-
splines). If we set the control functions P,Q to zero, then it will be necessary to slide the boundary
nodes along the parametric curves in order to satisfy Eq. 4.11, Eq. 4.12. A standard discretization
of our system is central differencing in the ξ and η directions. The system is then applied to the
interior nodes to solve for xi,j= (xi,j , yi,j) using an iterative method. With regard to the implementation
of boundary conditions, suppose along the boundary segments ξ = 0 and ξ = m the variables x and y
can be expressed in terms of a parameter u as x = x (u ) and y = y (u). For the ξ = 0 and ξ = m
boundaries, let (xη)i,j denote the central difference (1/2)(xi,j+1–xi, j–1) along the boundaries (i = 0 or i =
m). Using one-sided differencing for xξ, Eq. 4.12 is discretized along ξ = 0 as
(xη)0,j · (x1,j − x(u)) = 0
Eq. 4.13
where x(u) is the parametric representation of the boundary point x0,j. Linearizing about the current parameter value u0,
x(u) ≈ x(u0) + xu(u0)(u − u0)
Eq. 4.14
Substituting Eq. 4.14 into Eq. 4.13 and solving for u gives
u = u0 + [(xη)0,j · (x1,j − x(u0))] / [(xη)0,j · xu(u0)]
Eq. 4.15
to give x0,j = x(u) along the boundary ξ = 0.
Figure 4.8 Change in xξ when Boundary Point is Re-Positioned in Neumann Orthogonality
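As an illustration of this sliding update, the following hedged Python sketch (not from the source) performs one Eq. 4.15 parameter update for a node on the ξ = 0 boundary; the callables curve and dcurve, standing for x(u) and xu(u), and the function name are assumed interfaces.

import numpy as np

def slide_left_boundary_node(curve, dcurve, u0, x_eta_0j, x_interior):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # One Newton-like update of the boundary parameter (Eq. 4.15) for a node on xi = 0.
    # curve(u) and dcurve(u) return x(u) and x_u(u) as length-2 arrays;
    # x_eta_0j is the central-difference tangent along the boundary;
    # x_interior is the adjacent interior point x_{1,j}.
    num = np.dot(x_eta_0j, x_interior - curve(u0))
    den = np.dot(x_eta_0j, dcurve(u0))
    return u0 + num / den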
Similarly, substituting Eq. 4.14 into Eq. 4.13 at the ξ = m boundary implies that
u = u0 + [(xη)m,j · (xm−1,j − x(u0))] / [(xη)m,j · xu(u0)]
Eq. 4.16
to give xm,j = x(u) along the boundary ξ = m. Consider next the case where the boundaries are η = 0
and η = n. The orthogonality condition Eq. 4.12, with central differencing in the ξ direction and
one-sided differencing in the η direction, implies
boundaries, and it possesses an undesirable concentration of points in the interior of the grid. In fact,
there is folding of the algebraic grid in this central region.
Figure 4.10 shows an elliptically smoothed grid using Neumann orthogonality. The grid is clearly
seen to be smooth, boundary-orthogonal, and no longer folds in the interior. For certain applications,
this grid may be entirely acceptable. However, if the bottom boundary of the grid corresponded to a
physical boundary, then the results of Figure 4.10 might be deemed unacceptable. This is because,
although orthogonality has been established, grid point distribution (both along the boundary and
normal to the boundary) has been significantly altered. In this case, the Dirichlet orthogonality
technique will have to be employed.
Figure 4.10 An Elliptic Planar Mesh on a Bi-Cubic Geometry with Neumann Orthogonality
73 Sorenson, R.L., A computer program to generate two-dimensional grids about airfoils and other shapes by the
use of Poisson’s equations, NASA TM 81198. NASA Ames Research Center, 1980.
74 Sorenson, R.L., Three-dimensional elliptic grid generation about fighter aircraft for zonal finite difference
computations, AIAA-86-0429. AIAA 24th Aerospace Science Conference, Reno, NV, 1986.
75 Thompson, J.F., A general three-dimensional elliptic grid generation system on a composite block structure,
boundary. Instead, our technique automatically derives normal grid spacing data from the initial
algebraic grid. Assuming boundary orthogonality (Eq. 4.12), taking the inner product of Eq. 4.11 with xξ
and with xη yields the following two equations for the control functions on the boundaries:
P = −(xξ · xξξ)/g11 − (xξ · xηη)/g22 ,   Q = −(xη · xξξ)/g11 − (xη · xηη)/g22
Eq. 4.19
These control functions are called the orthogonal control functions because they were derived
using orthogonality considerations. They are evaluated at the boundaries and interpolated to the
interior using linear transfinite interpolation. These functions need to be updated at every iteration
during solution of the elliptic system. We now go into detail on how we evaluate the quantities
necessary in order to compute P and Q on the boundary using Eq. 4.19. Suppose we are at the “left”
boundary ξ = 0, but not at the corners (η ≠ 0 and
η ≠ n). The derivatives xη , xηη and the spacing
g22 = ||xη ||2 are determined using centered
difference formulas from the boundary point
distribution and do not change. However, the
g11, xξ , and xξξ terms are not determined by the
boundary distribution. Additional information
amounting to the desired grid spacing normal
to the boundary must be supplied.
A convenient way to infer the normal boundary
spacing from the initial algebraic grid is to
assume that the position of the first interior
grid line off the boundary is correct. Indeed,
near the boundary, it is usually the case that all
that is desired of the elliptic iteration is for it to
swing the intersecting grid lines so that they intersect the boundaries orthogonally, without changing the positions of the grid lines parallel to the boundary.
Figure 4.11 Projection of Interior Algebraic Mesh Point to Orthogonal Position
This is shown
graphically in Figure 4.11, where we see a grid
point, from the first interior grid line, swung
along the grid line to the position where
orthogonality is established. The effect of
forcing all the grid points to swing over in this
fashion would thus be to establish boundary
orthogonality, but still leave the algebraic
interior grid line unchanged. The similarity of
Figure 4.8 and Figure 4.11 seems to indicate
that this process is analogous to, and hence just
as “natural” as, the process of sliding the
boundary points in the Neumann orthogonality
approach with zero control functions.
Unfortunately, this preceding approach entails
the direct specification of the positions of the
first interior layer of grid points off the boundary.
Figure 4.12 Reflection of Orthogonalized Interior Mesh Point to Form External Ghost Point
This is not permissible for a couple of reasons. First, since they are adjacent to two
different boundaries, the points x1,1, xm–1,1, x1,n–1, and xm–1,n–1 have contradictory definitions for their
placement. Second, and more importantly, the direct specification of the first layer of interior
boundary points together with the elliptic solution for the positions of the deeper interior grid points
can lead to an undesirable “kinky” transition between the directly placed points and the elliptically
solved for points. (This “kinkiness” is due to the fact that a perfectly smooth boundary-orthogonal
grid will probably exhibit some small degree of nonorthogonality as soon as one leaves the boundary
even as close to the boundary as the first interior line. Hence, forcing the grid points on the first
interior line to be exactly orthogonal to the boundary cannot lead to the smoothest possible
boundary-orthogonal grid.)
Nevertheless, our “natural” approach for deriving grid spacing information from the algebraic grid
can be modified in a simple way, as depicted in Figure 4.12. Here, the orthogonally-placed interior
point is reflected an equal distance across the boundary curve to form a “ghost point.” Repeatedly
done, this procedure in effect forms an “exterior curve” of ghost points that is the reflection of the
first (algebraic) grid line across the boundary curve. The ghost points are computed at the beginning
of the iteration and do not change. They are employed in the calculation of the normal second
derivative xξξ at the boundary and the normal spacing off the boundary; the fixedness of the ghost
points assures that the normal spacing is not lost during the course of iteration, as it sometimes is in
the Neumann orthogonality approach. Conversely, all of the interior grid points are free to change
throughout the course of the iteration, and so smoothness of the grid is not compromised. More
precisely, again at the “left” ξ = 0 boundary, let (xη )0,j denote the centrally differenced derivative
(½)(x0,j+1 – x0,j–1). Let (xoξ )0, j denote the one-sided derivative x1,j – x0,j evaluated on the initial algebraic
grid. Then condition Eq. 4.12 implies that if a is the unit vector normal to the boundary, then
a ≡ xξ/‖xξ‖ = (yη, −xη)/√(xη² + yη²) = (yη, −xη)/√g22
Eq. 4.20
Now the condition from Figure 4.11 is
xξ = Pa(xoξ)
Eq. 4.21
where Pa= aaT is the orthogonal projection onto the one-dimensional subspace spanned by the unit
vector a. Thus we obtain
xξ = a(a · xoξ) = ((yη, −xη)/g22)(yη xoξ − xη yoξ)
Eq. 4.22
Finally, the reflection operation of Figure 4.12 implies that the fixed ghost point location should be
given by
x−1,j = x0,j − (xξ)0,j
Eq. 4.23
This can also be viewed as a first-order Taylor expansion involving the orthogonal derivative (xξ)0,j,
x−1,j = x0,j − Δξ (xξ)0,j
Eq. 4.24
with Δξ = 1. The orthogonal derivative (xξ)0,j is computed in Eq. 4.22 using only data from the
boundary and the algebraic grid. Now, in the control function evaluation at the boundary (Eq. 4.19),
the second derivative xξξ is computed using a centered difference approximation involving a ghost
point, a boundary point, and an iteratively updated interior point. The metric coefficient g11
describing spacing normal to the boundary is computed using Eq. 4.22 and is given by
(g11)0,j = (xξ)0,j · (xξ)0,j
Eq. 4.25
Finally, note that the value for (xξ )0, j used in Eq. 4.19 is not the fixed value given by Eq. 4.22, but is
the iteratively updated one-sided difference formula given by
(xξ)0,j = x1,j − x0,j
Eq. 4.26
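The left-boundary recipe above (Eqs. 4.20-4.23 and 4.25) can be gathered into a short illustrative Python sketch; the array layout, the exclusion of corner points, and the helper name left_boundary_ghost_points are assumptions made for this example only.

import numpy as np

def left_boundary_ghost_points(xb, yb, xi1, yi1):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # Dirichlet-orthogonality ghost points for the xi = 0 boundary.
    # xb, yb hold the fixed boundary points x_{0,j}; xi1, yi1 hold the first
    # interior line x_{1,j} of the initial algebraic grid. Corners are skipped.
    x_eta = 0.5 * (xb[2:] - xb[:-2])          # centered boundary tangent (x_eta)_{0,j}
    y_eta = 0.5 * (yb[2:] - yb[:-2])
    g22 = x_eta**2 + y_eta**2                 # ||x_eta||^2
    x0_xi = xi1[1:-1] - xb[1:-1]              # one-sided derivative on the algebraic grid
    y0_xi = yi1[1:-1] - yb[1:-1]
    s = (y_eta * x0_xi - x_eta * y0_xi) / g22 # projection onto a = (y_eta, -x_eta)/sqrt(g22), Eq. 4.22
    x_xi, y_xi = y_eta * s, -x_eta * s
    xg = xb[1:-1] - x_xi                      # reflected, fixed ghost points, Eq. 4.23
    yg = yb[1:-1] - y_xi
    g11 = x_xi**2 + y_xi**2                   # normal spacing metric, Eq. 4.25
    return xg, yg, g11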
Evaluation of quantities at the ξ = m boundary is similar. Note, however, that the ghost point locations
are given by
xm+1,j = xm,j − (xξ)m,j
Eq. 4.27
where (xξ )m, j is evaluated in Eq. 4.22 which is also valid for this boundary. On the “bottom” and “top”
boundaries η = 0 and η = n, it is now the derivatives xη , xηη , and the spacing g11 that are evaluated
using the fixed boundary data using central differences. Using similar reasoning to the “left” and
“right” boundary case, we obtain that for the “bottom” boundary the ghost point location is fixed to
be
xi,−1 = xi,0 − (xη)i,0 ,  where  (xη)i,0 = ((−yξ, xξ)/g11)(−yξ xoη + xξ yoη)
Eq. 4.28
Here, (−yξ, xξ) and g11 are evaluated using central differencing of the boundary data, and (xoη, yoη)
represents the one-sided derivative xi,1 − xi,0 evaluated on the initial algebraic grid. The metric
coefficient (g22)i,0 = (xη )i,0 . (xη )i,0 is now computed using Eq. 4.28, and xηη is computed using a ghost
point, a boundary point, and an iteratively updated interior point. The value of (xη )i,0 used in Eq.
4.19 is not the fixed value given in Eq. 4.28, but is the iteratively updated one-sided difference
formula given by
(𝐱 𝜼 )i,0 = 𝐱 i,1 − 𝐱 i,0
Eq. 4.29
Finally, the “upper” η = n boundary is similar, and we note that the ghost-point locations are given by
orthogonality in the regions that are within several grid lines of the corners. One way to do this is to
construct ghost points near the corners with the orthogonal projection operation Eq. 4.21 omitted
(i.e., constructed by simple extrapolation), and to use a blend of these ghost points and the ghost
points derived using the orthogonality assumption. To further ensure that the elliptic system
iterations do not cause grid folding near the boundaries, “guards” may be employed, similar to those
mentioned in the previous section on Neumann orthogonality. In practice, however, we have found
these to be unnecessary for Dirichlet orthogonality.
4.3.1.3.4 Blending of Orthogonal and Initial Control Functions
The orthogonal control functions in the interior of the grid are interpolated from the boundaries
using linear transfinite interpolation and updated during the iterative solution of the elliptic system.
If the initial algebraic grid is to be used only to infer correct spacing at the boundaries, then it is
sufficient to use these orthogonal control functions in the elliptic iteration. However, note that the
orthogonal control functions do not incorporate information from the algebraic grid beyond the first
interior grid line. Thus if it is desired to maintain the entire initial interior point distribution, then at
each iteration the orthogonal control functions must be smoothly blended with control functions that
represent the grid density information in the whole algebraic grid. These latter control functions we
refer to as “initial control functions,” and their computation is now described. The elliptic system Eq.
4.11 can be solved simultaneously at each point of the algebraic grid for the two functions P and Q
by solving the following linear system:
[ g22 xξ   g11 xη ; g22 yξ   g11 yη ] [P ; Q] = [R1 ; R2] ,  where (R1, R2) = 2g12 xξη − g22 xξξ − g11 xηη
Eq. 4.31
The derivatives here are represented by central differences, except at the boundaries where
one-sided difference formulas must be used. This produces control functions that will reproduce the
algebraic grid from the elliptic system solution in a single iteration. Thus, evaluation of the control
functions in this manner would be of trivial interest except when these control functions are
smoothed before being used in the elliptic generation system.
This smoothing is done by replacing the control function at each point with the average of the nearest
neighbors along one or more coordinate lines. However, we note that the P control function controls
spacing in the ξ-direction and the Q control function controls spacing in the η-direction. Since it is
desired that grid spacing normal to the boundaries be preserved between the initial algebraic grid
and the elliptically smoothed grid, we cannot allow smoothing of the P control function along ξ-
coordinate lines or smoothing of the Q control function along η-coordinate lines. This leaves us with
the following smoothing iteration where smoothing takes place only along allowed coordinate lines:
Pi,j = ½ (Pi,j+1 + Pi,j−1) ,   Qi,j = ½ (Qi+1,j + Qi−1,j)
Eq. 4.32
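A compact, illustrative Python sketch of evaluating the initial control functions from the algebraic grid (Eq. 4.31) and applying the constrained smoothing of Eq. 4.32 might look as follows; the interior-only evaluation, the number of smoothing passes, and the helper name are assumptions of this example.

import numpy as np

def initial_control_functions(x, y, smoothing_passes=4):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # Pointwise 2x2 solve of Eq. 4.31 for P, Q on the algebraic grid (interior points,
    # central differences), followed by the constrained smoothing of Eq. 4.32.
    x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1]); y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
    x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2]); y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
    x_xx = x[2:, 1:-1] - 2*x[1:-1, 1:-1] + x[:-2, 1:-1]
    y_xx = y[2:, 1:-1] - 2*y[1:-1, 1:-1] + y[:-2, 1:-1]
    x_ee = x[1:-1, 2:] - 2*x[1:-1, 1:-1] + x[1:-1, :-2]
    y_ee = y[1:-1, 2:] - 2*y[1:-1, 1:-1] + y[1:-1, :-2]
    x_xe = 0.25 * (x[2:, 2:] - x[2:, :-2] - x[:-2, 2:] + x[:-2, :-2])
    y_xe = 0.25 * (y[2:, 2:] - y[2:, :-2] - y[:-2, 2:] + y[:-2, :-2])
    g11 = x_xi**2 + y_xi**2; g22 = x_eta**2 + y_eta**2; g12 = x_xi*x_eta + y_xi*y_eta
    R1 = 2*g12*x_xe - g22*x_xx - g11*x_ee
    R2 = 2*g12*y_xe - g22*y_xx - g11*y_ee
    det = g22*x_xi*g11*y_eta - g11*x_eta*g22*y_xi          # determinant of the 2x2 system
    P = (R1*g11*y_eta - R2*g11*x_eta) / det                 # Cramer's rule
    Q = (R2*g22*x_xi  - R1*g22*y_xi ) / det
    for _ in range(smoothing_passes):                        # Eq. 4.32: smooth only along allowed lines
        P[:, 1:-1] = 0.5 * (P[:, 2:] + P[:, :-2])            # P averaged along eta (j) neighbors
        Q[1:-1, :] = 0.5 * (Q[2:, :] + Q[:-2, :])            # Q averaged along xi (i) neighbors
    return P, Q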
Smoothing of control functions is done for a small number of iterations. Finally, by blending the
smoothed initial control functions together with orthogonal control functions, we will produce
control functions that will result in preservation of grid density information throughout the grid,
along with boundary orthogonality. An appropriate blending function (bij) for this purpose is an
exponential blending function, as described before. The new blended values of the control functions are
then computed from the smoothed initial control functions and the orthogonal control functions.
Figure 4.13 An Elliptic Planar Mesh on a Bi-Cubic Geometry with Dirichlet Orthogonality
We note that if the user for some reason wished to preserve the interior clustering of grid points in
the algebraic grid, then the above scheme given for blending initial control functions with orthogonal
control functions would have to be slightly modified. This is because the algebraic grid is actually
folded in the interior, which makes the evaluation of the initial control functions using Eq. 4.31
ill-defined.
This is easily remedied by evaluating the initial control functions using Eq. 4.31 at the boundaries
only using one-sided derivatives, and then defining them over the whole mesh using transfinite
interpolation. Since there is no folding of the algebraic grid at the boundaries, this is well-defined.
(The interpolated initial control functions will reflect the grid density information in the interior of
the initial grid, because the interior grid point distribution of the initial grid was computed using the
same process, transfinite interpolation of boundary data.) Then we proceed as above, smoothing the
initial control functions and blending them with the orthogonal control functions.
Finally we note that if the algebraic initial grid possesses folding at the boundary, then using data
from the algebraic grid to evaluate either the initial control functions or the orthogonal control
functions at the boundary will not work. In this case, one could reject the algebraic grid entirely and
manually specify grid density information at the boundary. This would however defeat the purpose
of our approach, which is to simplify the grid generation process by reading grid density information
off of the algebraic grid. Instead, we suggest that in this case the geometry be subdivided into patches
sufficiently small so that the algebraic initial grids on these patches do not possess grid folding at the
boundaries.
4.3.1.3.5 Boundary Orthogonality for Surface and Volume Meshes (3D)
Now we turn our attention to applying the same principles of the previous section to the case of
surface grids. Our surface is assumed to be defined as a mapping x(u, v): R2 → R3. The (u, v) space is
the parametric space, which we conveniently take to be [0,1] × [0,1]. The parametric variables are
themselves taken to be functions of the computational variables ξ, η, which live in the usual [0, m] ×
[0, n] domain. Thus
𝐱 = (x, y, z) = (x(u, v), y(u, v), z(u, v)) and (u, v) = [u(ξ, η), v(ξ, η)]
Eq. 4.34
The mapping x(u, v) and its derivatives xu, xv , etc., are assumed to be known and evaluable at
reasonable cost. It is the aim of surface grid generation to provide a “good” mapping (u(ξ, η ), v(ξ, η
)) so that the composite mapping x( u(ξ, η ), v(ξ, η )) has desirable features, such as boundary
orthogonality and an acceptable distribution of grid points. A general method for constructing
curvilinear structured surface grids and its derivation are presented in [Khamayseh]76 and will not
be repeated here. Interested readers should consult the above-mentioned source, as well as
[Khamayseh & Mastin]77 and [Warsi]78.
4.3.1.4 Stretching Functions
In order to further improve the quality of the mesh, one can introduce univariate stretching functions
to either compress or expand grid lines in order to correct non-uniformity where grid lines are more
or less dense. These functions are arbitrarily chosen and only reflect the distribution of grid lines. We
can derive a new set of equations by combining our previously established differential model for grid
generation and a set of univariate stretching functions of our choice. In order to do so in a
straightforward manner, we can transform our Cartesian coordinates to a new set of coordinates
which exists in a different space, called the parameter space. Then, we define our stretching functions
as onto and one-to-one univariate functions of ξ and η, respectively. For additional information, please
consult the works cited in 79-81.
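As a hedged illustration only, one commonly used univariate stretching function (not prescribed by the text) is a one-sided hyperbolic-tangent clustering; the name tanh_stretch and the parameter beta are illustrative choices.

import numpy as np

def tanh_stretch(n, beta=2.0):
    # Illustrative sketch (not from the source); a common, but not the only, choice.
    # Maps the uniform parameter s in [0, 1] to a distribution clustered near s = 0;
    # larger beta gives stronger clustering toward that end.
    s = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)

# Example: cluster eta-lines toward the eta = 0 boundary of a unit-square grid.
eta = tanh_stretch(41, beta=3.0)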
76 Ahmed Khamayseh, Andrew Kuprat, C. Wayne Mastin, “Boundary Orthogonality in Elliptic Grid Generation”,
CRC Press LLC, 1999.
77 Khamayseh, A. and Mastin, C W., Computational conformal mapping for surface grid generation, J. Com. Phys.
1996.
78 Warsi, Z.U.A., Numerical grid generation in arbitrary surfaces through a second-order differential geometric
81 Joe F. Thompson, Z. U. A. Warsi, C. Wayne Mastin, “Numerical Grid Generation – Foundations and Application”,
4.3.1.5 Extension to 3D
If we wished to extend the elliptic solver to 3D, we would need to develop equations for transitioning
from a curvilinear coordinate system (ξ, η, ζ) to a 3D Cartesian coordinate system (x, y, z). Using the
same elliptic model as before, we get the 3D version of the Winslow equations where each of the
metric tensor coefficients is determined by taking the cofactors of the contravariant tensor matrix.
The contravariant tensor matrix is used to obtain the coefficients for the Winslow equations, which
are the inverse of the Laplace equations as stated before. In general, if we wish to extend our elliptic
mesh solver to n dimensions, then we will have n sets of equations, each with n!/(2!(n − 2)!) + n terms.
This renders the problem gradually more and more difficult to solve for higher dimensions with the
existing elliptic scheme, implying that a different type of PDE might be needed in these cases. Another
complication that arises in higher dimensions is adjusting grid lines to enforce orthogonality. Using
the aforementioned algorithm for adjusting grid lines to achieve either complete or partial
orthogonality on the boundary, we would need to iteratively solve three sets of two linear equations
for each node in the mesh, as well as solve three trigonometric equations per iteration to compute
the tangents. An excellent short webcast from Pointwise is available regarding their development in
structured meshing for an axial rotor blade.
4.3.1.6 Mesh Quality Analysis
In order to determine the quality of the resulting mesh, it was necessary to construct an objective
means of quality measurement. Therefore, several statistical procedures were implemented in the
program to produce a meaningful mesh quality analysis report. The metrics presented are divided into
the following categories (a small illustrative sketch of the orthogonality metrics follows the list):
➢ Orthogonality Metrics
• Standard deviation of angles
• Mean angle
• Maximum deviation from 90 degrees
• Percentage of angles within x degrees from 90 degrees (x can be set as a constant in
the code)
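A small illustrative Python sketch of the orthogonality metrics above, for a structured 2D grid, might look as follows; the function name, the dictionary of results, and the interior-angle sampling are assumptions of this example.

import numpy as np

def orthogonality_report(x, y, tol_deg=10.0):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # Interior grid-line intersection angles and the statistics listed above.
    x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1]); y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
    x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2]); y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
    cosang = (x_xi*x_eta + y_xi*y_eta) / np.sqrt((x_xi**2 + y_xi**2) * (x_eta**2 + y_eta**2))
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return {"mean angle": ang.mean(),
            "std dev of angles": ang.std(),
            "max deviation from 90": np.abs(ang - 90.0).max(),
            "% within tol of 90": 100.0 * np.mean(np.abs(ang - 90.0) <= tol_deg)}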
4.3.2 Hyperbolic Schemes
xξ yη − xη yξ = I     Eq. 4.35
Where I represents the area in physical space for a given area in computational space. The second
equation links the orthogonality of grid lines at the boundary in physical space, which can be written as
dξ = 0 = ξx dx + ξy dy     Eq. 4.36
xξ xη + yξ yη = 0
Eq. 4.37
82 Steger, J.L. and Sorenson, R.L. (1980). "Use of hyperbolic partial differential equation to generate body fitted coordinates", Numerical Grid Generation Techniques, NASA Conference Publication 2166: 463–478.
The problem associated with such a system of equations is the specification of I. Poor selection of I may
lead to shocks and discontinuous propagation of this information throughout the mesh. On the other hand,
an orthogonal mesh is generated very rapidly, which is an advantage of this method. Figure 4.14 displays
a C-O type hyperbolic grid around an HSCT wing-fuselage configuration, with pressure contours mapped
using an Euler solution at M∞ = 2.4.
Figure 4.14 Euler Solution on a HSCT Wing-Fuselage
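As a hedged illustration of how Eq. 4.35 and Eq. 4.37 enter a hyperbolic generator, the following sketch solves the two equations locally at each node of the current front for the marching direction; the one-step march, the use of np.gradient for front tangents, and the name hyperbolic_march_step are assumptions, and practical generators add smoothing and metric averaging.

import numpy as np

def hyperbolic_march_step(x, y, area):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # At each front node, combine orthogonality (x_xi*x_eta + y_xi*y_eta = 0, Eq. 4.37)
    # with the prescribed cell area I (x_xi*y_eta - x_eta*y_xi = I, Eq. 4.35)
    # to obtain the marching direction (x_eta, y_eta), then advance one eta level.
    x_xi = np.gradient(x)            # tangential derivatives along the current front
    y_xi = np.gradient(y)
    s = x_xi**2 + y_xi**2
    x_eta = -area * y_xi / s
    y_eta =  area * x_xi / s
    return x + x_eta, y + y_eta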
4.3.3 Parabolic Schemes
The solving technique is similar to that of hyperbolic PDEs by advancing the
solution away from the initial data surface
satisfying the boundary conditions at the end. Nakamura (1982) and Edwards (1985) developed the
basic ideas for parabolic grid generation. The idea uses either the Laplace or the Poisson equation, with
special treatment of the parts which control the elliptic behavior. The initial values are given as the
coordinates of the points along the surface η = 0, and the solution is advanced to the outer surface of
the object while satisfying the boundary conditions along the ξ edges. Control of the grid spacing was
not originally addressed; in the work of Nakamura and Edwards, grid control was accomplished using
non-uniform spacing. Parabolic grid generation has the advantage over hyperbolic grid generation that
no shocks or discontinuities occur and the grid is relatively smooth. The specification of initial values
and the selection of step size to control the grid points is, however, time consuming, but these techniques
can be effective once familiarity and experience are gained.
83 J.F. Thompson, B.K. Soni and N.P. Weatherill. Handbook of Grid Generation. CRC Press, 1998.
removes the folded grid lines. There are far too many algebraic functionals for grid generation and
optimization to cover here; the reader should consult84.
Figure 4.15 Folded Grid by Transfinite Interpolation - Smooth Grid by Winslow Functional
84 Sanjay Kumar Khattri, “Grid Generation and Adaptation by Functional”, Department of Mathematics,
University of Bergen, Norway.
85 Michael Turner, “High-Order Mesh Generation For CFD Solvers”, Imperial College London, Faculty of
Engineering Department of Aeronautics, A thesis submitted for the Degree of Doctor of Philosophy, 2017.
• A means of distributing points over the field in an orderly fashion, so that neighbors may be
easily identified and data can be stored and handled efficiently.
• A means of communication between points so that a smooth distribution is maintained as
points shift their position.
• A means of representing continuous functions by discrete values on a collection of points with
sufficient accuracy, and a means for evaluation of the error in this representation.
• A means for communicating the need for a redistribution of points in the light of the error
evaluation, and a means of controlling this redistribution.
Several considerations are involved here, some of which are conflicting. The points must concentrate,
and yet no region can be allowed to become devoid of points. The distribution also must retain a
sufficient degree of smoothness, and the grid must not become too skewed, else the truncation error
will be increased as noted. This means that points must not move independently, but rather each
point must somehow be coupled at least to its neighbors. Also, the grid points must not move too far
or too fast, else oscillations may occur. Finally the solution error, or other driving measure, must be
sensed, and there must be a mechanism for translating this into motion of the grid. The need for a
mutual influence among the points calls to mind either some elliptic system, thinking continuously,
or some sort of attraction (repulsion) between points, thinking discretely. Both approaches have been
taken with some success, and both are discussed below. It should be noted that the use of an
adaptive grid may not necessarily increase the computer time, even though more computations
are necessary, since convergence properties of the solution may be improved, and certainly
fewer points will be required. With the time derivatives at fixed values of the physical coordinates
transformed to time derivatives taken at fixed values of the curvilinear coordinates, no interpolation
is required when the adaptive grid moves. Thus the first derivative transformation, from the chain
rule, is given by
(∂A/∂t)ξ,η,ζ = (∂A/∂t)x,y,z + ∇A · (∂x/∂t)ξ,η,ζ   or   (∂A/∂t)ξ,η,ζ = (∂A/∂t)x,y,z + Σi=1..3 (∂A/∂xi)(∂xi/∂t)
Eq. 4.38
where the summation is the mesh-movement contribution.
The computation thus can be done on a fixed grid in the transformed space, without need of
interpolation, even though the grid points are in motion in physical space. The influence of the motion
of the grid points registered through the grid speeds, (xi)t, appearing in the transformed time
86 Thompson, J.F., Warsi, Z.U.A., and Mastin, C.W., Numerical Grid Generation: Foundations and Applications, North-Holland, New York, 1985.
derivative. This is the appropriate approach when the grid evolves with the solution at each time
step. Some methods, however, change the grid only at selected time steps, and here interpolation
must be used to transfer the values from the old grid to the new, since the grid movement is not
continuous. A combination of the weight functions provides the desired tendency toward concentration
both in regions of high gradient and near extrema. The effect of the inclusion of the curvature is
illustrated below:
w = (1 + β²K)[1 + α²(∂A/∂x)²]^(1/2) ,  where  K = |∂²A/∂x²| / [1 + (∂A/∂x)²]^(3/2)
Eq. 4.39
Where α and β are parameters to be specified. Clearly, concentration near high gradients is
emphasized by large values of α , while concentration near extrema (or other regions of large
curvature) is emphasized by large β. (see Figure 4.16).
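A minimal Python sketch of evaluating the weight function of Eq. 4.39 on a discrete 1D solution profile, assuming NumPy finite differences for the derivatives, is given below; the function name adaptive_weight and the default parameter values are illustrative.

import numpy as np

def adaptive_weight(A, x, alpha=1.0, beta=1.0):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # Concentration near high gradients is controlled by alpha, concentration near
    # extrema (high curvature) by beta, following Eq. 4.39.
    A_x  = np.gradient(A, x)
    A_xx = np.gradient(A_x, x)
    K = np.abs(A_xx) / (1.0 + A_x**2)**1.5      # curvature of the solution profile
    return (1.0 + beta**2 * K) * np.sqrt(1.0 + alpha**2 * A_x**2)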
87Feng Liu, Shanhong Ji, and Guojun Liao,” An Adaptive Grid Method and Its Application to Steady Euler Flow
Calculations", SIAM J. Sci. Comput., Vol. 20, No. 3, 1998.
one strong shock wave on the upper side of the airfoil and a weaker one on the lower side. It can be
seen from Figure 4.18 that the computed shock waves are rather thick. Shock waves have zero thickness
in the inviscid limit. To get better computational results, particularly to capture the shock waves
more accurately, one would like to concentrate grid points around the shock waves. The deformation
method is applied to get a new grid with prescribed distribution of cell sizes based on gradients of
the flow field. The adaptive criterion here is to detect the shock waves. This suggests choosing the
monitor function f of the form:
1/f = C1(1 + C2 |∇P|)
Eq. 4.40
Where P is the pressure and C1 and C2 are constants. It can be seen that grid points are clustered
closely in the areas where the two shock waves occur, although grid lines are somewhat skewed in
the clustered regions because the deformation method does not guarantee orthogonality. However,
since our flow solver is based on a finite volume scheme which does not require the use of an
orthogonal grid, we are content with the locally reduced cell sizes.
Figure 4.17 Grid Adaptation and Mach Contours for Supersonic Airfoil
Another test case is the supersonic flow over the same airfoil with a free stream Mach number M∞ =
1.5 and α = 0. As can be seen, a strong bow shock wave appears in front of the airfoil leading edge. In
addition, there are two weak shocks emanating from the trailing edge of the airfoil. Figure 4.17 shows
the Mach number distribution computed on the adapted grid. It can be seen that a sharper front of the bow shock is
captured compared with that on the initial un-adapted grid. The resolution of the two trailing edge
shocks is also slightly increased. The computational time needed for the grid adaptation and the flow
solver for the supersonic case is the same as that for the transonic case. (See Figure 4.17).
88 Thacker WC. A brief review of techniques for generating irregular computational grids. International Journal for Numerical Methods in Engineering. 1980;15:1335–41.
89 Lo SH. A new mesh generation scheme for arbitrary planar domains. Int J Num. Meth Eng. 1985;21:1403–2
90 Baker TJ. Three dimensional mesh generation by triangulation of arbitrary point sets, AIAA 8th CFD conference.
second international conference on num grid gen comp fluid dyn, Miami, FL, 1988. p. 589–97.
93 YerryMA, Shephard MS., “Automatic three-dimensional mesh generation by the modified Octree technique”,
94 Wikipedia.
The use of edge-based data structures results in lower memory overheads and increased
computational throughputs because redundant computations are eliminated and the gather-scatter
required for vectorization in supercomputers is minimized. Furthermore, because sets of edges can
be used as building blocks for arbitrarily shaped elements, hybrid meshes with mixed element types
may be handled by a single edge-based data structure, at least for inviscid flows. For viscous flows,
the Galerkin finite-element discretization of the diffusion terms on simplicial meshes results in a
nearest-neighbor stencil and thus may be implemented using an edge-based data structure95.
However, this is not the case for non-simplicial meshes, since the resulting stencils involve vertices
that are not connected to the center vertex by a mesh edge (such as diagonally opposed vertices in
hexahedral elements). In these situations, the element-based data structure must be retained96. An
alternative is to resort to the thin-layer approximation of the viscous terms on non-simplicial meshes,
which can be implemented exclusively along edges97. The limitations of this approach are obvious,
although it is justifiable for highly stretched prismatic or hexahedral meshes, where streamwise
resolution has been sacrificed for efficiency.
An interesting property of the edge-based data structure is that it can provide an interpretation of
the discrete operator as a sparse matrix. For nearest neighbor stencil discretization, all points in the
stencil are joined to the center point by a mesh edge. The discretization operator can be written as a
sparse matrix, where each nonzero entry in the matrix corresponds to a stencil coefficient or edge of
the mesh. For systems of equations, the edges correspond to nonzero block matrix entries in the large
sparse matrix. This interpretation has implications for the implementation of implicit and algebraic
multigrid solution schemes 98-99. One of the disadvantages of the edge-based data structure is that it
requires a preprocessing operation to extract a unique list of edges from the list of mesh elements
and to compute the associated edge coefficients. For unsteady flows with dynamic meshes, this
preprocessing must be performed every time the mesh is altered, although this may be done locally.
Additionally, for dynamic grid cases, the element data structures are generally required for
performing mesh motion or adaptation, since edge lists represent a lower-level description of the
mesh100.
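As a hedged illustration of the edge-based data structure described above, the following Python sketch extracts the unique edge list from a triangle list and performs the typical single loop over edges; the function names and the Laplacian-like edge flux are assumptions of this example, not the cited solvers' implementations.

import numpy as np

def extract_edges(triangles):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # Preprocessing step: extract the unique list of edges (vertex pairs) from triangles.
    edges = set()
    for a, b, c in triangles:
        for v0, v1 in ((a, b), (b, c), (c, a)):
            edges.add((min(v0, v1), max(v0, v1)))
    return sorted(edges)

def edge_loop_residual(edges, coeff, u, n_vertices):
    # A single loop over edges that scatters an edge flux to the two end-point
    # vertices, the basic kernel of an edge-based solver.
    res = np.zeros(n_vertices)
    for (i, j), w in zip(edges, coeff):
        flux = w * (u[j] - u[i])     # e.g. a Laplacian-like edge difference
        res[i] += flux
        res[j] -= flux
    return res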
95 Mavriplis DJ, "Unstructured mesh generation and adaptivity", VKI Lect. Ser. Computational Fluid Dynamics,
26th, VKI-LS 1995.
96 Braaten ME, Connell SD., “Three dimensional unstructured adaptive multigrid scheme for the Navier-Stokes
105(1):83–91, 1993.
99 Mavriplis DJ, Venkatakrishnan V., “A unified multigrid solver for the Navier-Stokes equations on mixed element
term discretization. For simplicial meshes, gradients can first be computed on the mesh elements,
which can then be interpolated to the control-volume faces in order to form second differences at the
mesh vertices. This type of discretization can be derived more formally using a Galerkin finite-
element procedure and assumes linear variation of the flow variables on the mesh elements. The
resulting discretization produces a nearest-neighbor stencil and may be implemented as a single loop
over the edges of the mesh, rather than as a two-pass procedure that computes intermediate cell-
based gradients; [Barth]102 and [Mavriplis]103. For non-simplicial meshes, Galerkin finite-element
formulations using bilinear or trilinear variations on quadrilateral or hexahedral elements can be
constructed. The resulting stencils, however, are no longer compact, as they involve vertices within
mesh elements that are not connected by a mesh edge, such as diagonally opposed vertices in
hexahedral elements [Braaten & Connell104].
An alternative strategy for discretizing viscous terms for vertex-based schemes is to employ the
vertex-based gradients already computed in the context of second-order upwind schemes (using a
Green-Gauss integration around the vertex-based control volumes, for example), instead of the
element-based gradients described above. A vertex-based second difference can then be computed
by integrating these gradients themselves around the control-volume boundaries [Luo et al 1993].
This approach enables the viscous term discretization to be assembled on meshes of arbitrary
element types using the same data structures as required for the upwind convection terms. The
principal drawback of this method is that it results in a large stencil that involves neighbors and next
to-neighbors, which on a structured mesh reduces to a stencil of size 2h, where h represents the mesh
spacing. This not only reduces overall accuracy but is also ineffective at damping odd-even
oscillations in the numerical scheme. For high-Reynolds-number flows of practical interest, the
Reynolds-averaged form of the Navier-Stokes equations is generally employed, which requires the
use of additional turbulence-modeling equation(s). Although algebraic models can be implemented
on unstructured grids [Mavriplis]105, they were conceived for simple wall-bounded flows and are
thus ill suited for flows over complex geometries. The current practice is to employ two-equation
models of the K-ε or K-ω type, or simpler one-equation eddy viscosity models [Baldwin & Barth
1992], [Spalart & Allmaras]106. These equations contain convective, diffusive, and source terms that
can be discretized in a manner analogous to the discretization of the flow equations. Turbulence
modeling equations often result in stiff numerical systems, and care must be taken to devise schemes
that preserve positivity of the turbulence quantities at all stages of the solution process. A common
practice is to discretize the convective terms using a first order upwind strategy, from which a
positive scheme that obeys a maximum principle can be obtained.
102 Barth TJ. 1992. Aspects of unstructured grids and finite-element volume solvers for the Euler and Navier
Stokes equations. Von Karman Inst. Lect. Ser., AGARD Publ. R-787
103 Mavriplis DJ. 1995b. A three-dimensional multigrid Reynolds-averaged Navier-Stokes solver for unstructured
moving front approach), Octree decomposition together with a means of merging, or snapping, the
outermost Octree hexahedra to the boundary, or by making use of the medial axis in 2D, medial
surface in 3D, to produce a type of multi-block decomposition that is more amenable to meshing by
paving108. The possibility of automatically generating unstructured hexahedral meshes is tantalizing
and offers the prospect of providing automated mesh generation suitable for solid mechanics
computations using finite element methods as well as meshes suitable for the computation of the
RANS equations. Much depends on the mesh quality near solid boundaries and it remains to be seen
whether any of the current approaches to hexahedral mesh generation can provide the required
flexibility in terms of geometry handled and the necessary quality in terms of mesh orthogonality
near solid boundaries. Figure 5.2 displays the different methodologies which are currently available in
mesh generation engines.
Figure 5.2 Mesh Generation Methodologies: Contiguous Multiblock, Overset Multiblock (Chimera), Cartesian (Hexahedral and assorted Polyhedral), Tetrahedral/Prismatic Advancing Front, Delaunay, and Fully Unstructured Hexahedral
109 J. T. Hwang and J. R. R. A. Martins, “An unstructured quadrilateral mesh generation algorithm for aircraft
automatically creates unstructured quadrilateral meshes for the full airframe based on just the
description of the desired structural members. The approach consists in representing each node in
the mesh as a linear combination of points on the geometry so that the structural mesh morphs as
the geometry changes, as it would, for example, in aero-structural optimization. The algorithm
divides the aircraft skin into 4-sided domains based on the underlying B-spline representation of the
geometry. It meshes each domain independently using an algorithm based on constrained Delaunay
triangulation, triangle merging and splitting to obtain a quadrilateral mesh, and elliptical smoothing.
Examples of full configuration structural meshes are provided, and a mesh convergence study is
performed to show that element quality can be maintained as the structural mesh is refined. Here, an
automatic unstructured quadrilateral mesh generation algorithm for aircraft structures is presented
that uniquely satisfies the four requirements mentioned above. The algorithm starts with a B-spline
surface geometry representation and a list of requested structural members defined in terms of
parametric locations on the surfaces. It then splits the geometry into domains, meshes each domain
independently using Constrained Delaunay triangulation (CDT) as well as merging and splitting
operations, and then applies Laplacian smoothing as a final step.
5.5.1 Geometry Representation
The only requirements on the geometry
representation are that it is continuous
and watertight. The geometry is
represented using untrimmed B-spline
surfaces, though this is not the only choice
with which the structural mesh
generation algorithm would work. B-
splines are piecewise polynomials used
frequently in computer-aided design
because of their favorable mathematical
properties: compact support for a desired
order and smoothness, and flexibility in
terms of the number of control points and
polynomial degree. B-spline surfaces are
tensor products of B-spline curves that
maintain the advantages of smoothness
and sparsity. Figure 5.3 illustrates how a
conventional wing-body-tail aircraft geometry can be constructed with 4-sided B-spline surfaces110.
Figure 5.3 Conventional Configuration Geometry (a), Final Structural Mesh (Courtesy of Hwang & Martins)
5.5.2 Local Mesh Generation Algorithm
In general, 2D quad meshing algorithms fall under three general categories: domain-decomposition,
advancing-front, and triangulation-based methods. The first two, which recursively split the domain
through heuristic algorithms and march out from boundaries, respectively, are not suitable for the
current problem because of the line constraints imposed by the structural members intersecting the
skin. Two additional ideas that have been successful are topology clean-up and smoothing. There has
been work dealing with line constraints in structural mesh generation for marine engineering. The
local mesh generation algorithm consists of six stages, as illustrated in Figure 5.4. The figure shows
a domain for illustrative purposes, containing a vertical edge extending from the top to the bottom of
the domain, two diagonal edges intentionally chosen to form a triangular region, and a shorter edge
that is floating by itself near the center of the domain. The six stages are as follows:
1. Initial domain: We start with a 4-sided domain representing a single B-spline surface, with
the internal members intersecting this surface pre-determined.
2. Discretization: We discretize the boundaries and the interior of the domain. The boundaries
are simply discretized using a global parameter representing the requested resolution. This
guarantees that the bounding edges shared by two neighboring domains always agree on the
boundary nodes because all domains use the same resolution parameter. The interior of the
domain is populated with a grid of points that are spaced based on the sizes of the element
boundaries, as measured from the preview mesh.
3. Triangulation: We perform CDT on the domain while respecting the edges from the
boundaries and from the intersecting structural members.
4. Quad-dominant mesh: From the triangulation, we obtain a quad-dominant mesh by ranking
all potential merges of adjacent triangles based on how close the angles would be to 90
degrees. The triangles are merged according to this ranking until no possible merges remain
(a small sketch of this merging step is given after the list).
5. Fully-quad mesh: We split all quads into four smaller quads and all triangles into three quads
using their centroids to obtain a fully quad mesh.
6. Smoothing: We apply an elliptic smoothing operator (Laplacian) as the last step.
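As an illustration of the triangle-merging step (stage 4), the following hedged Python sketch ranks candidate merges of edge-sharing triangles by how far the resulting quadrilateral's interior angles deviate from 90 degrees and applies them greedily; it is not the authors' implementation, the function names are assumed, and a production version would also reject non-convex quadrilaterals as noted later in the text.

import numpy as np
from itertools import combinations

def quad_angles(pts):
    # Interior angles (degrees) of a quadrilateral given its 4 vertices in order.
    ang = []
    for k in range(4):
        a, b, c = pts[k - 1], pts[k], pts[(k + 1) % 4]
        u, v = a - b, c - b
        cosg = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        ang.append(np.degrees(np.arccos(np.clip(cosg, -1.0, 1.0))))
    return ang

def greedy_triangle_merge(points, triangles):
    # Illustrative sketch (not from the source); points is an (N, 2) array,
    # triangles a list of vertex-index triples. Score every pair of triangles
    # sharing an edge by the worst deviation of the merged quad's angles from
    # 90 degrees, then merge greedily, best score first.
    candidates = []
    for t1, t2 in combinations(range(len(triangles)), 2):
        shared = set(triangles[t1]) & set(triangles[t2])
        if len(shared) != 2:
            continue
        a, b = shared
        c = (set(triangles[t1]) - shared).pop()
        d = (set(triangles[t2]) - shared).pop()
        quad = [c, a, d, b]                                   # vertices ordered around the quad
        score = max(abs(q - 90.0) for q in quad_angles([points[i] for i in quad]))
        candidates.append((score, t1, t2, quad))
    used, quads = set(), []
    for score, t1, t2, quad in sorted(candidates):
        if t1 in used or t2 in used:
            continue
        used.update((t1, t2))
        quads.append(quad)
    return quads    # triangles that were never merged remain triangular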
Other notable sources regarding quad meshing are [Remacle et al.]111 and [Verma & Tautges]112.
Figure 5.4 The Six Steps of the Unstructured Quad Meshing Algorithm
111 Remacle, J.-F., Lambrechts, J., Seny, B., Marchandise, E., Johnen, A., and Geuzaine, C., "Blossom-Quad: a non-uniform quadrilateral mesh generator using a minimum cost perfect matching algorithm", Int. J. Numerical Meth. Eng. 2010; 00:1-6.
5.5.3 Conversion of Triangular to Quadrilateral Meshes
Another simple strategy was developed by [Lyra & de Carvalho]113, in which quadrilateral meshes are
generated from triangular meshes. Unstructured quadrilateral meshes can be automatically
generated in several different ways and do not impose serious topological restrictions on the meshes,
being appropriate for dealing with complex geometries and naturally allowing local non-uniform mesh
refinement. Several different approaches have been proposed to generate unstructured quadrilateral
meshes. These methodologies can be divided into two basic groups:
112 Chaman Singh Verma and Tim Tautges, “Jaal: Engineering a high quality all-quadrilateral mesh generator”,
Argonne National Laboratory, Argonne IL, 60439.
113 Paulo Roberto M. Lyra, Darlan Karlo E. de Carvalho, "A Computational Methodology for Automatic Two-Dimensional Anisotropic Mesh Generation and Adaptation".
The conversion of triangular meshes is particularly attractive because the resulting meshes can inherit
the properties of the triangular meshes, whose generators are very well developed, and since it is always
possible to build a triangular mesh over any arbitrary 2D domain, quadrilateral meshes can be
constructed to be as general as the triangular ones. It also allows the use of any triangular mesh generator
as a "black box". As we generate a quadrilateral mesh using the conversion strategy, the quadrilateral
mesh inherits the characteristics of the initial triangulation. For both iso- and anisotropic meshes this
strategy consists of four main steps, as presented in Figure 5.5.
The standard strategy of merging triangles into quadrilaterals consists in eliminating a common edge
that belongs to two adjacent triangles. Following the work done by [Xie and Ramaekers (1994)] and
[Alquati and Groehs (1995)], our mesh generator is such that it refrains from merging triangles that
would form a non-convex quadrilateral. Besides, for anisotropic meshes, the merging process will
remove a common edge between two adjacent triangles only if the two quadrilaterals to be created
satisfy a quality criterion which is controlled by two
geometric parameters. The adopted procedure
generates a quadrilateral mesh with edges that are
approximately half of those of the corresponding
triangular elements and usually this is not a
serious concern, since the user can generate a
coarser initial triangulation to obtain the desired
mesh density. The four steps involved in the
quadrilateral mesh generation can be seen in
Figure 5.5 (1-4).
114Stephen M. Ruffin, NASA Ames Research Center in coordination with Georgia Institute of Technology,
“GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows”, 2003.
techniques and multigrid acceleration schemes. The difficulties in using Cartesian grids arise from
the fact that the control volumes adjacent to the surfaces are not usually aligned with the surfaces
and thus special techniques need to be employed to handle the non-Cartesian (cut or split) cells in
these regions. Cut cells are created when the intersection of the Cartesian cell and the solid surface
results in one computational volume with only a fraction of the original volume and possibly non-
Cartesian aligned edges, see Figure 5.7 (a). Split cells are created when the intersection of the
Cartesian cell and the solid surface results in two or more computational volumes which might have
non-Cartesian aligned edges, see Figure 5.7 (b).
Figure 5.7 Solid Surface Overlaying Cartesian Cell and Resulting Cut and Split Cells (Courtesy of NASA Ames)
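A minimal, illustrative classification of Cartesian cells against a circular body (the circle is an assumption used only for this sketch) flags cells whose corners straddle the surface as cut; detecting split cells requires the actual surface/cell intersection and is not attempted here.

import numpy as np

def classify_cells(xc, yc, h, radius=1.0):
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # xc, yc hold cell-center coordinates of a uniform Cartesian grid of spacing h;
    # the body is a circle of the given radius centered at the origin.
    flags = np.empty(xc.shape, dtype=object)
    for idx in np.ndindex(xc.shape):
        x0, y0 = xc[idx] - 0.5 * h, yc[idx] - 0.5 * h
        corners = [(x0, y0), (x0 + h, y0), (x0, y0 + h), (x0 + h, y0 + h)]
        inside = [np.hypot(px, py) < radius for px, py in corners]
        flags[idx] = "cut" if any(inside) and not all(inside) else ("solid" if all(inside) else "fluid")
    return flags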
The original use of Cartesian grids involved solving the 2D full potential equation by [Purvis and
Burkhalter]115, followed shortly afterwards by [Wedan and South]116, in which a non-body-oriented
structured grid was created on which the full potential equation was solved. Their solution strategy
was to use finite volume techniques in order to more easily handle the computational cells that were
intersected by the solid surface. Additionally, they used linear approximations in the cut cells for the
reconstruction of the wall boundary conditions which provided a simple algorithm for
implementation and preserved the structure of their coefficient matrix during the solution iteration
so that no extra computational costs were incurred for the cut cells. However, this did not preserve
the actual body curvature and also only provided a linear approximation to the actual surface lengths
and area for the cut cells, and thus could not exactly model curved surfaces. Also, little mention was
made of any attempts at cell refinement to more accurately capture the surface geometry and flow
features. Earlier, [Clarke et al.]117 used Cartesian grids to solve the two-dimensional Euler equations
(again on non-grid aligned surfaces). They attempted to more accurately model the solid surface
boundary conditions by utilizing the local surface curvature in reconstructing the wall boundary
conditions. They also provided more accurate modeling of the cut cell lengths and areas by using the
actual surface geometry in their calculations and not linear approximations. Additionally, they noted
that clustering was needed in certain critical regions in order to produce accurate results, and this
was achieved by clustering entire grid lines. Cut cells that were too small (less than 50% of the
original cell size) were merged with neighbor cells in order to avoid time stepping problems
associated with very small computational cells. [Gaffney and Hassan]118 extended this research to
115 J. W. Purvis and J. E. Burkhalter. Prediction of Critical Mach Number for Store Configurations. AIAA J.,1979.
116 B. Wedan and J. C. South, Jr. A Method for Solving the Transonic Full-Potential Equation for General
Configurations. In AIAA 6th Computational Fluid Dynamics Conference, Danvers, MA, July 1983. AIAA-83-1889.
117 D. K. Clarke, M.D. Salas, and H. A. Hassan. Euler Calculations for Multi element Airfoils Using Cartesian Grids.
Figure 5.9 Semi-Automatic Cartesian Mesh Generation with CFOW – Courtesy of Kawasaki
119 R. L. Meakin. On Adaptive Refinement and Overset Structured Grids. 13th AIAA, 1997.
120 Hidemasa Yasuda, Taku Nagata, Atsushi Tajima, Akio Ochi, "014 - KHI Contribution to GMGW-1", Kawasaki Heavy Industries, Ltd., with cooperation of Kawaju Gifu Engineering Co., Ltd., 1st Geometry and Mesh Generation Workshop, Denver, CO, June 3-4, 2017.
121M. J. Berger and R. J. LeVeque. An Adaptive Cartesian Mesh Algorithm for the Euler Equations in Arbitrary
Geometries. 9th AIAA Computational Fluid Dynamics Conference, Buffalo, NY, June 1989. AIAA-89-1930-CP.
flow cells and allowed larger time steps to be taken with the solver remaining stable. Several
researchers have extended [Berger and LeVeque's] research into areas such as multigrid Cartesian
grids, higher accuracy flow solvers using more sophisticated flux approximations, time-accurate
unsteady flows, and a front tracking AMR scheme that attempted to track the discontinuities (such
as shocks) as the solution evolved in order to provide more accuracy in the refined mesh calculations.
According to a recent investigation by [Hiroshi Abe]122, Cartesian grid methods fall into two
categories with regard to the demand for accurate solutions. One keeps the structured grid nature and
introduces embedded structured sub-grids within the underlying coarse structured grids.
Adaptive Mesh Refinement (AMR) is one of them. Figure 5.12 (a) shows an example of AMR in
two dimensions. The cells intersected by a circle in the underlying coarse grid are tagged in blue. The
blue-tagged cells are to be refined. In the AMR procedure, several embedded rectangular patches are
defined so as to contain the blue-tagged cells. Then, the embedded rectangular patch areas are refined.
Figure 5.12 Schematic image of Adaptive Mesh Refinement – (Courtesy of Hiroshi Abe)
The other considers the Cartesian mesh as an unstructured collection of h-refined meshes. The data
structure is not that of structured grids but that of unstructured grids. The adaptive Cartesian
grid method was introduced as an unstructured Cartesian grid method and has shown great
success in simulating complex geometries. Figure 5.12 (a) shows a case of the two-dimensional
adaptive Cartesian grid method. Beginning with a root cell covering the whole domain, the cells
intersected by the circle are recursively bisected. This simple procedure finally gives Figure 5.12 (b).
Further examples are provided for a 2D backward-facing step (see Figure 5.13).
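The recursive bisection just described can be sketched with a simple quadtree; the Cell class, the circle test, and the refinement depth are assumptions of this illustrative Python example, not the cited method's implementation.

class Cell:
    # Illustrative sketch (not from the source); names and interfaces are assumed.
    # A square cell of an adaptive (quadtree) Cartesian grid.
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size     # lower-left corner and edge length
        self.children = []

def intersects_circle(cell, cx, cy, r):
    # True if the circle boundary crosses the cell (corners lie on both sides).
    corners = [(cell.x + dx * cell.size, cell.y + dy * cell.size)
               for dx in (0, 1) for dy in (0, 1)]
    inside = [((px - cx)**2 + (py - cy)**2) < r * r for px, py in corners]
    return any(inside) and not all(inside)

def refine(cell, cx, cy, r, max_level, level=0):
    # Recursively bisect every cell intersected by the circle, as described above.
    if level >= max_level or not intersects_circle(cell, cx, cy, r):
        return
    h = 0.5 * cell.size
    cell.children = [Cell(cell.x + i * h, cell.y + j * h, h) for i in (0, 1) for j in (0, 1)]
    for child in cell.children:
        refine(child, cx, cy, r, max_level, level + 1)

# Example: root cell covering the whole domain, refined 5 levels around a circle.
root = Cell(0.0, 0.0, 1.0)
refine(root, 0.5, 0.5, 0.3, max_level=5)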
122Hiroshi Abe, “Blocked Adaptive Cartesian Grid FD-TD Method for Electromagnetic Field with Complex
Geometries”, International Conference on Modeling and Simulation Technology, Tokyo, JAPAN, 2011.
In order to more accurately determine the appropriate surface forces to add to the momentum
equations, [Fadlun et al.]126 developed a second-order boundary interpolation scheme for three
dimensional incompressible flows by using linear interpolation to reconstruct the state information
at the surface. This approach resulted in the use of larger time steps (CFL around 1.5) and better
accuracy at the surface. Further advances by [Lai and Peskin]127 developed second-order methods
for moving membranes. Additionally, [Kim et al.]128 developed a second-order method with both
momentum and mass sources in order to improve the overall accuracy of their results. While these
123 C. S. Peskin. Numerical Analysis of Blood Flow in the Heart. Journal of Computational Physics, 1977.
124 C. S. Peskin. The Fluid Dynamics of Heart Valves: Experimental, Theoretical, and Computational Methods.
Annual Review of Fluid Mechanics, 14:235-259, 1982.
125 D. Goldstein, R. Handler, and L. Sirovich. Modeling a No-Slip Flow Boundary with an External Force Field.
schemes handle the Navier-Stokes equations on Cartesian grids, they all suffer from numerical
stability problems that typically require numerical diffusion. Also, the surface is not sharply resolved,
and is typically smeared over 2 or 3 cells. This can cause problems when flow details are needed
near the surface.
5.6.2.3 Volume of Fluid Methods
Another approach to solving the Navier-Stokes equations on Cartesian grids is the volume of fluid
method. In this method, a scalar transport equation is solved in addition to the Navier-Stokes
equations. The scalar is a value between 0 and 1 that represents the volume fraction that the fluid (or
gas) occupies in that cell. The typical use of this scheme is free surface flows, where the scalar
represents the amount of the cell that the fluid occupies, and interfacial flows, where the scalar
represents the volume fraction that a species occupies in the cell.
[Hirt and Nichols]129 originally developed this method as part of an incompressible free-surface
Navier-Stokes solver. In order to retain the incompressible invariance in the transport equation,
strict mass conservation was required of the numerical solver. They also used a first order accurate
surface reconstruction technique which causes problems resolving the interface boundaries. The
volume of fluid schemes typically work well when the interface curvature is small with respect to the
surface modeling. Otherwise, artificial discontinuities can develop as well as the inability to resolve
the small scale features at the interfaces. Additionally, without accurate propagation of the scalar
transport equation and sophisticated schemes to resolve the interface boundaries, artificial mixing
can occur.
5.6.2.4 Reconstruction Schemes
Another class of schemes used to solve the Navier-Stokes equations on Cartesian grids are the
reconstruction based schemes. These have been proposed by [Ye et al.]130-131 and [Majumdar et al.]132.
These schemes are all based around the idea of interpolating the state information to the nodes in
the computational domain around the surface. [Ye et al.] have developed a two-dimensional
incompressible Navier-Stokes equation solver. The solver uses the cell-merging technique to eliminate
any surface cells that are smaller than 50% of their full size. Then, the state information for the faces
of the new cell are found by utilizing a linear-quadratic two-dimensional interpolation from the
surrounding cells. This technique results in a slow convergence of the pressure Poisson equation and
requires acceleration techniques. This technique has been extended to moving boundaries.
[Majumdar et al.] have developed a two-dimensional, turbulent Reynolds-Averaged Navier-Stokes
solver on uniform Cartesian grids. This solver uses interpolation polynomials in one and two
dimensions to reconstruct the state of the cells that are inside the body. Thus, the solution process is
performed over uniform cells at the surface. The interpolation process can cause numerical
instabilities due to the negative coefficients that can arise with certain interpolation polynomials.
5.6.2.5 Cut Cell Based Methods
[Frymier et al.]133 developed the first work in the application of the full Navier-Stokes equations on
Cartesian grids using the cut cell approach. The solution procedure was a straight-forward finite-
129 C. W. Hirt and B. D. Nichols. Volume of Fluid (VOF) Method for Dynamics of Free Boundaries. Journal of
Computational Physics, 39(1):201-221, 1981.
130 T. Ye, R. Mittal, H. S. Udaykumar, and W. Shyy. An Accurate Cartesian Grid Method for Viscous Incompressible
Flows with Complex Immersed Boundaries. Journal of Computational Physics, 156(2):209-240, 1999.
131 T. Ye, R. R. Mittal, H. S. Udaykumar, and W. Shyy. A Cartesian Grid Method for Viscous Incompressible Flows
with Complex Immersed Boundaries. In AIAA 3rd Weakly Ionized Gases Workshop, Norfolk, VA, November 1999.
132 S. Majumdar, G. Iaccarino, and P. Durbin. RANS Solvers with Adaptive Structured Boundary Non-Conforming
Grids. Annual Research Briefs 208782, Center for Turbulence Research, Stanford University, Stanford, CA, 2001.
133 P. D. Frymier, Jr., H. A. Hassan, and M.D. Salas. Navier-Stokes Calculations Using Cartesian Grids: I. Laminar
volume approach with the Cartesian grids clustered using grid lines. Their results demonstrated
strong dependencies on the smoothness of the surface grid where non-smooth surface grids
produced non-smooth skin-friction and surface pressure values. A large number of standard viscous
flux formulations for cut cell based schemes were analyzed to ascertain their accuracy and positivity
characteristics. These viscous flux formulations fell into two categories:
1. Green-Gauss reconstructions, where the divergence theorem was applied to cells neighboring
the face at which the flux was being calculated in order to build the integration path;
2. Polynomial-based reconstructions, which used a Lagrange polynomial and a set of support cells
to interpolate the state variables where needed, with the polynomial being differentiated to
obtain the needed gradients.
This research focused on the accuracy of the various formulations via a standard Taylor series
approximation analysis and on the positivity of the formulations. The positivity is a measure of how
well the discretization satisfies the local maximum principle that holds for all homogeneous, second-
order partial differential equations (PDEs). The local maximum principle simply states that the
solution to a homogeneous, second-order PDE at one point is bounded by the values of its neighbors.
It is a statement of the diffusive nature of second-order PDEs, and thus it is a necessary requirement
for any discretization of a homogeneous, second-order PDE. The results of this effort were that all of
the schemes demonstrated (to some degree) a competition between the accuracy of the scheme and
the positivity of the viscous stencil for non-uniform cells; that is, any attempt to improve the
accuracy adversely affected the positivity, and vice versa.
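As a concrete illustration of the first category listed above, the following sketch (Python; the cell-face data layout and the simple two-cell face averaging are assumptions for illustration, not the specific formulations analyzed in the cited work) applies the divergence theorem over one cell to recover a cell-averaged gradient.

import numpy as np

def green_gauss_gradient(phi_c, phi_nbrs, face_normals, face_areas, volume):
    """Green-Gauss gradient of a scalar phi in one 2D/3D cell.

    phi_c        : cell-centered value in the cell of interest
    phi_nbrs     : values in the neighbor cells, one per face
    face_normals : outward unit normals, shape (n_faces, dim)
    face_areas   : face areas (edge lengths in 2D), shape (n_faces,)
    volume       : cell volume (area in 2D)

    Face values are approximated by a simple two-cell average; the divergence
    theorem then gives grad(phi) ~ (1/V) * sum_f(phi_f * n_f * A_f).
    """
    phi_f = 0.5 * (phi_c + np.asarray(phi_nbrs))            # face-averaged values
    flux = phi_f[:, None] * face_normals * face_areas[:, None]
    return flux.sum(axis=0) / volume

# Usage on a unit square cell with phi = x + 2y sampled at neighbor centers:
normals = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
areas = np.ones(4)
grad = green_gauss_gradient(1.5, [2.5, 0.5, 3.5, -0.5], normals, areas, 1.0)
print(grad)   # approximately [1., 2.]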
The resulting numerical analysis was performed for low to moderate Reynolds number flows. Cases
where the surface was predominantly aligned with the coordinate directions showed excellent
agreement with theoretical values, but when the body was not aligned with the coordinate directions
(thus, the surface had cut cells of varying volume fractions of the uncut cells) large oscillations
occurred in the results due to the sensitivity of the viscous stencil to the grid smoothness (for both
cut cells and coarse/fine cell interfaces). Another impediment to utilizing this scheme for high
Reynolds number flows was the large number of control volumes needed to adequately resolve the
viscous regions. Even with AMR this became prohibitively large for even moderately complex
geometries. In addition to the viscous flux formulation results, AMR was applied to Coirier's solution
strategies with a positive effect, but without fully eliminating the sensitivity of the viscous stencil to
the cut-cell smoothness. Another approach that was discussed was the use of embedded,
body-oriented grids to capture the boundary layers, but no numerical results were given.
5.6.2.6 Chimera Grid Schemes
The use of a collection of grids to cover the computational domain is known as chimera gridding.
Typically, a body-oriented structured grid is used around each component of the solid surfaces. Each
of these structured grids is then overlaid onto a background Cartesian mesh. Figure 5.14 shows an
example of a two-dimensional chimera grid collection around a simple curved surface. Notice that
there is no simple mapping of cells between the body-oriented grid and the background Cartesian
grid. This feature is one of the drawbacks of chimera gridding schemes, but it is only a performance
penalty when the grid needs to be generated, that is, during initialization and after any AMR
processes.
Figure 5.14 Example Chimera Grid Near a Curved Surface – (Courtesy of NASA Ames)
The development of chimera gridding schemes was not founded solely in the viscous/inviscid
coupling problem, but chimera gridding schemes were applicable to that use. Throughout the
history of chimera gridding there have been a number of motivations for their investigation, such as
increasing grid point resolution near solid bodies and overcoming the structured-gridding issues
associated with modeling complex geometries. [Atta]134 developed one of the first uses of chimera
grids for the full potential equation in two dimensions using a finite difference formulation. A uniform
Cartesian grid was used for the background grid and a body-fitted O-type structured grid was used around the
body. The two grids were coupled via boundary information exchanges during the iteration process.
First, the solution around the body fitted grid was converged through an outer iteration using a
Dirichlet boundary condition imposed on the outer boundary. Next, the outer grid was converged
using a Neumann boundary condition on the inner boundary, utilizing the solution information from
the body solution. This information was then used to converge the body fitted grid once again. This
cycle continued until the solution approached steady-state.
This procedure required each grid (body and background) to have at least one complete cell inside
the domain of the other, with the inner grid having an extent of between 1 and 3 chord lengths in all
directions. Significant effort was needed to minimize the overlapping region in order to achieve
optimal performance. [Atta] later extended this methodology to three dimensions as well as to more
complex configurations. [Steger et al.]135 developed a finite-difference chimera grid scheme that
could handle a much larger variety of configurations than Atta's work. While limited to two
dimensions, they presented results for an airfoil-flap, cascading blades, a non-lifting bi-plane and an
inlet with a center-body configuration. All of these configurations were handled automatically by their
solver with few changes to the standard finite-difference formulations. State variables were
exchanged between grids through interpolations, which can cause performance penalties in the
initialization stages when the connectivity is being constructed; they addressed this by using the
"stencil-walk" search pattern, where the cells used for the interpolation of one cell are
assumed to be close to the cells needed for the interpolation of that cell's neighbors.
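The sketch below (Python; the structured donor grid, starting indices, and distance criterion are illustrative assumptions) conveys the stencil-walk idea: starting from the donor cell found for a nearby point, the search steps to whichever face neighbor lies closer to the target point and stops when no neighbor improves, so only a handful of cells are visited rather than the whole grid. On strongly skewed grids a robust implementation would finish with an explicit containment test.

import numpy as np

def stencil_walk(X, Y, px, py, i0, j0, max_steps=1000):
    """Walk through a structured donor grid toward the cell containing (px, py).

    X, Y   : node coordinates of the donor grid, shape (ni, nj)
    i0, j0 : starting cell indices (e.g. the donor found for a neighboring point)
    Returns the (i, j) of the cell whose center is locally closest to the point.
    """
    ni, nj = X.shape[0] - 1, X.shape[1] - 1          # number of cells

    def center(i, j):
        return (X[i:i+2, j:j+2].mean(), Y[i:i+2, j:j+2].mean())

    i, j = i0, j0
    for _ in range(max_steps):
        cx, cy = center(i, j)
        best, best_d = (i, j), (cx - px) ** 2 + (cy - py) ** 2
        # look at the four face neighbors and move to the closest one
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < ni and 0 <= jj < nj:
                cx, cy = center(ii, jj)
                d = (cx - px) ** 2 + (cy - py) ** 2
                if d < best_d:
                    best, best_d = (ii, jj), d
        if best == (i, j):                           # no neighbor is closer: stop
            return i, j
        i, j = best
    return i, j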
5.6.2.7 Hybrid Grid Schemes
Another approach related to the chimera grid approach was the use of unstructured grids between
the body surface and the background Cartesian mesh, as opposed to overlaying these grids. Such
schemes are usually referred to as hybrid grid techniques. Figure 5.15 shows an example hybrid grid
around a curved surface in two dimensions. One application of a hybrid scheme known as
SPLITFLOW, by Karman136 and enhanced by [Domel and Karman]137, used Cartesian grids for the
majority of the computational domain and prismatic grids to resolve the boundary layers. Standard
Cartesian grid cutting techniques were used at the interface between the prismatic grids and the
Cartesian grid. The prismatic cells were grown from the surface
134 E. Atta. Component-Adaptive Grid Interfacing. In 19th Aerospace Sciences Meeting, St. Louis, MO, January
1981. AIAA-81-0382.
135 J. L. Steger, F. C. Dougherty, and J. A. Benek. A Chimera Grid Scheme. In K. N. Ghia and U. Ghia, editors,
Advances in Grid Generation, Presented at the Applied Mechanics, Bioengineering, and Fluids Engineering
Conference, volume 5, pages 59-69. The Fluid Engineering Division, ASME, Houston, TX, June 1983.
136 S. L. Karman, Jr. SPLITFLOW: A 3D Unstructured Cartesian/Prismatic Grid CFD Code for Complex
Geometries. In 33rd Aerospace Sciences Meeting and Exhibit, Reno, NV, January 1995. AIAA. AIAA-95-0343.
137 N. D. Domel and S. L. Karman, Jr. Splitflow: Progress in 3D CFD with Cartesian Omni-tree Grids for Complex
Geometries. AIAA 38th Aerospace Sciences Meeting & Exhibit, Reno, NV, January 2000. AIAA-2000-1006.
triangulation using a marching-layers technique. The difficulties that could arise in the
prismatic-Cartesian technique near convex regions, overlapping regions, and other regions
where the prismatic marching technique needed to be modified to create viable grids were addressed.
Other Related Methods. Similar to the reconstruction method is the class of finite element solution
techniques called element-free Galerkin methods. Originally developed by [Belytschko et al.]138 for
elasticity and heat conduction problems, it is currently being investigated for its applicability to fluid
dynamics because of its automated handling of grid generation. The basic premise of this method is
the use of polynomial curve fits to approximately represent the data surrounding the node of interest.
Typically, a least-squares error minimization is used because the number of data points surrounding
the node is larger than the number of unknowns in the curve fit. Most implementations demonstrate
oscillations near sharp gradients (especially with higher-order interpolation functions), and more
research is needed to develop effective limiters.
Figure 5.15 Example Hybrid Grid Near a Curved Surface – (Courtesy of NASA Ames)
Another scheme related to the reconstruction method is the grid-less method. This method uses
a cloud of points to reconstruct a polynomial curve fit (similar to the element-free Galerkin method)
using a least-squares error minimization. These curve fits are then used to calculate the derivatives
required to solve the Navier-Stokes equations in differential form. The number of calculations per
node is higher than for other techniques due to the large number of least-squares fits that are
required. Unfortunately, this scheme is not conservative and requires numerical dissipation in
order to obtain a solution. Other researchers have extended this work, but without addressing the
conservation problem.
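A minimal sketch of the least-squares fit underlying such point-cloud methods is given below (Python; the linear fit and the sample point cloud are illustrative assumptions): a linear polynomial is fitted to the values at a cloud of neighboring points and differentiated to obtain the gradient at the node of interest.

import numpy as np

def cloud_gradient(x0, xs, us):
    """Least-squares gradient of u at point x0 from a surrounding cloud.

    x0 : point of interest, shape (dim,)
    xs : coordinates of the surrounding points, shape (n, dim)
    us : values of u at those points, shape (n,)

    Fits u(x) ~ u0 + g . (x - x0) in the least-squares sense and returns
    (u0, g); with more cloud points than unknowns the system is overdetermined.
    """
    dx = np.asarray(xs) - np.asarray(x0)               # shape (n, dim)
    A = np.hstack([np.ones((dx.shape[0], 1)), dx])     # columns [1, dx, dy, ...]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(us), rcond=None)
    return coeffs[0], coeffs[1:]                       # value and gradient at x0

# Example: u = 3x - 2y sampled on a small scattered cloud around the origin
pts = np.array([[0.1, 0.0], [-0.2, 0.1], [0.0, 0.3], [0.2, -0.1], [-0.1, -0.2]])
u0, grad = cloud_gradient([0.0, 0.0], pts, 3 * pts[:, 0] - 2 * pts[:, 1])
print(u0, grad)   # ~0.0 and ~[3., -2.]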
5.6.2.7.1 Composite Grid Approach
The composite grid generation approach is based on meshing a given arbitrary domain by the
geometric union of lower-level grids built in more primitive domains. The advantages of such an
approach are the simplicity of meshing domains with complicated geometry and the convenient
definition of appropriate mesh refinement. Furthermore, the resulting grid is partly structured, and
this feature can be utilized for building robust numerical solution schemes. The methodology includes
three basic steps:
1. constructing structured prototype grids,
2. mapping these grids to a non-regular geometry (if necessary), and
3. the final superposition of the low-level grids into the final one.
The procedure is outlined in Figure 5.16 and discussed in detail139.
138 T. Belytschko, Y. Y. Lu, and L. Gu. Element-Free Galerkin Methods. International Journal for Numerical
Methods in Engineering, 37(2):229-256, January 1994.
139 E. I. Kalinin, A. B. Mazo and S. A. Isaev, "Composite mesh generator for CFD problems", 11th International
Conference on "Mesh methods for boundary-value problems and applications", IOP Publishing, IOP Conf. Series:
Materials Science and Engineering 158 (2016) 012047, doi:10.1088/1757-899X/158/1/012047.
Figure 5.16 Basic Superposition Example – (Courtesy of Kalinin, Mazo and Isaev)
5.6.3 Discussion
It is generally accepted that a boundary conforming mesh is desirable to achieve accurate solutions
from any numerical solver. If one is willing to sacrifice this requirement then mesh generation
becomes a much simpler task. No approach beats regular structured grids in terms of efficiency
and accuracy. Thus, there have been a number of efforts to use such grids for complex geometries
an approach known as the Cartesian grid approach. An early example of a non-aligned Cartesian mesh
can be found in the work of [Carlson]140. Difficulties arise at the boundary where the Cartesian mesh
intersects the boundary surface. Although finite difference methods can be derived to interpolate
the boundary conditions onto the nearest mesh points, it is difficult to ensure solution accuracy. If
extra points are inserted where the mesh lines intersect the surface, however, then it is possible to
create a boundary conforming mesh. In this respect, boundary conforming Cartesian methods are
seen to be closely related to the Octree based triangulation methods. In fact, the elements obtained
from the Octree and its intersection with the boundaries are precisely the elements that make up the
Cartesian mesh. Conversely, any Cartesian mesh can be converted into an Octree type triangulation
by splitting all elements into tetrahedra (or triangles in 2D).
140 Carlson LA. Transonic Airfoil Analysis and Design Using Cartesian Coordinates. AIAA 2nd Computational Fluid Dynamics Conference, 1975.
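A toy illustration of this idea in 2D is sketched below (Python; the circle "geometry", the corner-sign cut test, and the depth limit are all assumptions of the sketch, and a real generator would use robust surface intersection tests): cells cut by the boundary are subdivided recursively, while uncut cells are kept as they are.

import math

def refine(cell, inside, depth, max_depth, leaves):
    """Recursively subdivide square cells that are cut by the boundary.

    cell     : (x, y, h) lower-left corner and size of a square cell
    inside   : function returning True if a point is inside the body
    leaves   : list collecting the final (uncut or maximally refined) cells
    """
    x, y, h = cell
    corners = [inside(x + a, y + b) for a in (0.0, h) for b in (0.0, h)]
    cut = any(corners) and not all(corners)          # boundary crosses the cell
    if cut and depth < max_depth:
        h2 = 0.5 * h
        for dx in (0.0, h2):
            for dy in (0.0, h2):
                refine((x + dx, y + dy, h2), inside, depth + 1, max_depth, leaves)
    else:
        leaves.append(cell)

# Example: refine a unit box toward a circle of radius 0.3 centered at (0.5, 0.5)
inside_circle = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.3 ** 2
cells = []
refine((0.0, 0.0, 1.0), inside_circle, 0, 5, cells)
print(len(cells), "leaf cells")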
Most of the elements in a Cartesian mesh will be hexahedra although the elements adjacent to the
surface can be expected to assume a variety of polyhedral shapes depending on the way in which an
Octree hexahedron intersects any given region of the boundary surface. A Cartesian mesh is
therefore well suited for use by a finite volume or finite element method that can accept arbitrarily
shaped elements. This approach has been developed extensively by [Aftosmis et al.]141. Given the
close affinity between Cartesian meshes and Octree based triangulations it is to be expected that they
share the same advantages and limitations. In particular, the problems of correctly finding the
intersection between the Cartesian/Octree mesh and the boundary surface, identifying the element
shapes for the intersected Cartesian cells and adequately refining the mesh near small boundary
features, are substantial. Cartesian mesh methods also suffer from the drawback that the surface
discretization is not known beforehand and it is therefore often difficult to ensure good surface mesh
quality. On the plus side, since the surface discretization is a by-product of the volume discretization,
it is possible to generate meshes around highly complex geometries without the need for carefully
crafted surface meshes. In fact, the surface definition can be obtained directly from the CAD
description, provided there is a utility to determine the intersection of a given line with the surface.
Cartesian and Octree based mesh generation methods thus circumvent the need for the prior
creation of a surface mesh, a significant advantage if a fast turnaround time in going from design
prototype to flow solution is desired. Figure 5.17 shows a Cartesian grid on a generic airplane
configuration.
Figure 5.17 Example of Cartesian Grid on a Generic Airplane – (Source: Richard Smith 1996)
141 Aftosmis MJ, Berger MJ, Melton JE. ,”Robust and efficient Cartesian mesh generation for component-based
Geometry”, AIAA J 1998; 36:952–60.
The size of the individual Octree components, and hence the size of the tetrahedral elements in the
near field, can be tailored to match the variation in surface curvature. But the quality of elements
adjacent to the boundary surface, and likewise the quality of the surface triangulation, can be very
poor. This can be a considerable handicap, since an accurate implementation of the boundary
conditions often requires a good quality mesh near the boundary. For high Reynolds number
Navier-Stokes computations, which must capture the flow details inside thin boundary layers,
the lack of a good quality mesh near a boundary causes considerable difficulties. One way to
alleviate these problems is to build a good quality mesh in the near field by extrusion of hexahedra,
prisms or tetrahedra off the boundary surface and then merge this extruded mesh with an Octree
based mesh at a position that is some way off the boundary142,143. It is best when creating an octree
mesh to do the following:
142 Karman SL, “SPLITFLOW: a 3-D unstructured Cartesian/prismatic grid CFD code for complex geometries”,
AIAA 33rd aerospace sciences meeting, Reno, NV. AIAA paper 95-0853, 1995.
143 Shaw JA, Stokes S, Lucking MA, “The rapid and robust generation of efficient hybrid grids for rans simulations
over complete aircraft”, International Journal Numeric Method Fluids 2003; 43:785–820.
144 Loïc Maréchal, "Mesh Generation: Handling Sharp Features", Gamma project, I.N.R.I.A., Rocquencourt,
78153 Le Chesnay, France.
145 Atta E. Component–adaptive grid interfacing. AIAA19th aerospace sciences meeting. AIAA paper 81-0382.
146 Benek JA, Buning PG, Steger JL. “A 3-D Chimera grid embedding technique”. AIAA 7th CFD conference,
Overset grids do not need to create block interfaces, and this advantage has facilitated the early application of the overset
approach to complicated geometries. Another advantage of permitting such a loose connection
between neighboring meshes is the possibility of treating moving body problems (e.g. store
separation). The penalty for these advantages lies in the need to transfer information between
neighboring meshes. This requires a means of determining an appropriate overlap region and the
development of interpolation formulae to ensure accurate data transfer.
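As a minimal illustration of such an interpolation formula (Python; the unit-square donor cell, node ordering, and local coordinates are assumptions of the sketch, not any particular overset code), a fringe value on the receiving grid can be built as a bilinear combination of the four donor nodal values.

def bilinear_donor_value(q, xi, eta):
    """Bilinear interpolation inside a donor cell.

    q        : values at the four donor nodes, ordered counterclockwise at
               (xi, eta) = (0,0), (1,0), (1,1), (0,1)
    xi, eta  : local coordinates of the receiver point inside the donor cell
    """
    w = [(1 - xi) * (1 - eta), xi * (1 - eta), xi * eta, (1 - xi) * eta]
    return sum(wi * qi for wi, qi in zip(w, q))

# A receiver point at the cell center gets the average of the four donor values
print(bilinear_donor_value([1.0, 2.0, 3.0, 4.0], 0.5, 0.5))   # 2.5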
Overset meshes are also known as Chimera or overlapping meshes. An overset mesh typically
contains a body of interest, such as a boat or a gear, and is superimposed on a background mesh
containing the surrounding geometry. The data is interpolated between them148. This approach
allows complex motion and moving parts to be easily set up and simulated. Overset meshes typically
involve a background mesh adapted to the environment and one or more overset grids attached to
bodies, overlapping with the background mesh. Multiple overlapping overset regions are also
possible, expanding the potential applications of this technology. Data interpolation occurs between
the grids, which can move with respect to one another. They are most useful in simulating multiple
or moving bodies, as well as in parametric studies and optimization analyses. By allowing the overset
body to move and also be replaced as many times as needed with different geometry, this technology
truly brings multidisciplinary design exploration to the fingertips of engineers and designers (see
Figure 5.21).
Figure 5.21 Two Counter-Rotating Objects Embedded in Two Overset Regions with Background Mesh – (Courtesy of Siemens)
The remaining region of space can have an extremely complicated shape which may not yield to an
acceptable covering by tetrahedral elements, thus preventing the volume triangulation from filling
the entire region to be meshed.
The basic methodology is based on actions performed for a certain boundary image, as described by
geometric rules or tests. These rules (in 2D and 3D) serve to optimize the shape of the new element in
the advancing front method. Each methodology depends on these rules, on their complexity, and on
how they are applied. Therefore, the algorithm has to check the rules stored in data structures. The
code complexity is independent of the number of rules. The algorithm is complicated, but well
defined and can, at least theoretically, be implemented failsafe. Especially in 3D, the choice of the
concrete rules is based on heuristics, which are put into an easily maintainable rule-description database155.
6.1.1 Advancing Front Triangular Mesh Generator
The original advancing front algorithm has been developed over time into a family of programs which
are very reliable and flexible, allowing an easy incorporation of mesh adaptation156. The advancing
front mesh generator can be described as follows.
The computational domain is modeled through the use of cubic splines which are defined by some
control points. Close to singularities, extra care must be taken in the definition of these points in order
to avoid failure (Thompson et al., 1999). As a "pre-processing" stage, before the mesh generation
begins, an initial and very coarse triangular background mesh that covers the whole domain must
first be built. This coarser mesh is used only to provide a piecewise linear spatial distribution of the
nodal parameters over the mesh to be constructed. Typically, elements of the generated mesh will
have a projected length of δ2 in the direction parallel to α2 and a projected length of St·δ2 in the
direction normal to α2 (see Figure 6.2), with St being the stretching factor. During the generation
process, the local values of these parameters will be obtained by a linear interpolation over the
triangles of the background mesh.
Figure 6.2 Mesh Parameters
The boundary of the domain is
represented by the union of boundary segments forming closed loops. External boundaries are
155 Joachim Schöberl, "NETGEN An advancing front 2D/3D-mesh generator based on abstract rules", Computing
and Visualization in Science, 1:41–52 (1997).
156 Paulo Roberto M. Lyra, Darlan Karlo E. de Carvalho, "A Computational Methodology for Automatic Two-
Dimensional Anisotropic Mesh Generation and Adaptation".
defined in an anti-clockwise fashion while inner boundaries are set in a clockwise manner. As
described previously, the generation of a triangular mesh by the advancing front technique begins by
the discretization of the boundary of the domain. New points are created according to the mesh
parameters which are interpolated from those of the background mesh. At the beginning of the
process, the generation front is made by a set of linear segments connecting the boundary nodes.
With the initial front defined, one segment is chosen and, in general, a triangle is created through the
insertion of an internal node or by simply connecting existing nodes. New triangles are built following
the same procedure. During the process any segment available to build a new triangle is set as
“active” and the others which are set as “non-active” are removed from the generation front.
Therefore the boundary segments are not modified during the mesh generation. The procedure
continues until the whole domain is discretized. When solving problems which develop some
essentially one dimensional features at certain regions (e.g. boundary layer, shocks, etc.) it is not very
efficient to use uniform isotropic meshes. In these cases, it is important to have the
possibility to define a direction and a stretching factor for the elements close to such regions. At least
for linear triangular elements, the use of anisotropic meshes can be extremely important in terms of
computational effort and accuracy. To generate an anisotropic triangulation of the desired domain,
a transformation T, which is a function of the mesh parameters αi, i = 1, 2, is used. This
transformation157 is given by
T(αi, δi) = Σ_{i=1..N} (1/δi) (αi ⊗ αi)     Eq. 6.1
where ⊗ denotes the tensor product of two vectors and N is the number of dimensions; here, N = 2.
The effect of this transformation is to map the physical domain into a normalized domain, in which a
mesh is generated whose elements are approximately equilateral with unit average size. Applying
the inverse transformation T⁻¹, we end up with a directionally stretched mesh dictated by the mesh
parameters, which are defined either by the analyst or by the mesh adaptive procedure. This mesh
generator provides accurate geometric modeling and high quality meshes, where the high level of
control of the distribution of local mesh parameters eases the incorporation of mesh adaptation
strategies. The quality of the meshes is strongly influenced by the mesh optimization stage. A specific
mesh improvement strategy for highly anisotropic meshes and the definition of an adequate sequence
of mesh enhancement procedures are incorporated into the code. Several other modifications have
been introduced in the original code in order to incorporate the flexibility to deal with predefined
multi-domains and automatically defined sub-regions, to build boundary layer meshes, and to make
possible the generation of quadrilateral and mixed meshes.
Figure 6.3 Surface Mesh of SGI Logo
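A small sketch of the mapping in Eq. 6.1 is given below (Python; the sample directions and spacings are made up for illustration and are not tied to the cited code): the transformation matrix is assembled from the stretching directions αi and spacings δi, applied to map a physical vector into the normalized space, and its inverse recovers the stretched physical configuration.

import numpy as np

def anisotropic_transform(directions, spacings):
    """Build T = sum_i (1/delta_i) * (alpha_i outer alpha_i), as in Eq. 6.1.

    directions : unit vectors alpha_i, shape (N, N), one direction per row
    spacings   : desired element sizes delta_i along each direction, shape (N,)
    """
    T = np.zeros((len(spacings), len(spacings)))
    for alpha, delta in zip(np.asarray(directions, float), spacings):
        T += np.outer(alpha, alpha) / delta
    return T

# Example: spacing 0.1 along x, 1.0 along y (an element stretched in y)
T = anisotropic_transform([[1.0, 0.0], [0.0, 1.0]], [0.1, 1.0])
edge_phys = np.array([0.1, 1.0])        # one "element edge" in physical space
edge_norm = T @ edge_phys               # components become of order one
edge_back = np.linalg.solve(T, edge_norm)   # applying T^-1 recovers the stretched edge
print(edge_norm, edge_back)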
157 Peiró, J., Peraire, J. and Morgan, K., 1994, "FELISA SYSTEM: Reference Manual Part 1 - Basic Theory",
University of Wales Swansea Report CR/821/94.
➢ Check for Special Cases. Before proceeding to construct a quadrilateral from the current
front, several special case scenarios are checked. These include situations where large
transitions or small angles exist local to the front. In these cases a seam, or transition seam
operation is performed.
158 Steven J. Owen, Matthew Staten, Scott A. Canann and Sunil Saigal, "Advancing Front Quadrilateral Meshing
Using Triangle Transformations", Conference Paper, January 1998.
➢ Side Edge Definition. Using the front edge as the initial base edge of the quadrilateral, side
edges are defined. Side edges may be defined by using an existing edge in the initial triangle
mesh, by swapping the diagonal of adjacent triangles, or by splitting triangles to create a new
edge. In Figure 6.5 (b), side edge NB-NC shows the use of an existing edge, while the side
edge NA-ND was formed from a local swap operation.
➢ Top Edge Recovery. The final edge on the quadrilateral is created by an edge recovery
process. During this process, the local triangulation is modified by using local edge swaps to
enforce an edge between the two nodes at the ends of the two side edges. Edge NC-ND in
Figure 6.5 (c) was formed from a single swap operation. Any number of swaps may be
required to form the top edge.
➢ Quadrilateral Formation. Merging any triangles bounded by the front edge and the newly
created side edges and top edge as shown in Figure 6.5 (d) forms the final quadrilateral.
➢ Local Smoothing. The mesh is smoothed locally to improve both quadrilateral and triangle
element quality as shown in Figure 6.5 (e).
➢ Local Front Reclassification. The front is advanced by removing edges from the front that
have two quadrilateral adjacencies and adding edges to the front that have one triangle and
one quadrilateral adjacency. New front edges are classified by state. Existing fronts that may
have been adjusted in the smoothing process are reclassified.
Front edge processing continues until all edges on the front have been depleted, in which case an all-
quadrilateral mesh will remain, assuming an even number of initial front edges. When an odd number
of boundary intervals is provided, a single triangle must be generated, usually towards the interior
of the mesh.
Figure 6.5 Steps demonstrating process of generating a quadrilateral from Front NA-NB - (Courtesy of
Owen et al.)
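The quadrilateral formation step can be illustrated with a small sketch (Python; the node-tuple mesh representation is a hypothetical stand-in for Owen et al.'s data structures): the two triangles bounded by the front edge, side edges and recovered top edge share a diagonal, which is removed to leave a single counterclockwise quadrilateral.

def merge_triangles(tri_a, tri_b):
    """Merge two triangles sharing exactly one edge into a quadrilateral.

    tri_a, tri_b : tuples of three node ids, each ordered counterclockwise
    Returns the four quadrilateral node ids, counterclockwise, or None if the
    triangles do not share an edge.
    """
    shared = set(tri_a) & set(tri_b)
    if len(shared) != 2:
        return None
    apex_b = (set(tri_b) - shared).pop()           # node of tri_b off the diagonal
    # walk around tri_a and insert apex_b between the two shared (diagonal) nodes
    quad = []
    for k, node in enumerate(tri_a):
        quad.append(node)
        nxt = tri_a[(k + 1) % 3]
        if node in shared and nxt in shared:
            quad.append(apex_b)
    return tuple(quad)

# Triangles NA-NB-NC and NA-NC-ND sharing the diagonal NA-NC form quad NA-NB-NC-ND
print(merge_triangles(("NA", "NB", "NC"), ("NA", "NC", "ND")))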
Figure 6.9 compares Q-Morph against Lee's (1994) quad meshing algorithm, which uses an
indirect method, coupled with an advancing front scheme, to combine triangles into quadrilaterals.
The toroidal surface of Figure 6.7 is composed of four surface patches represented as rational B-
Splines. Q-Morph utilizes projection and geometric evaluation routines as part of the local and final
smoothing procedures to maintain nodal locations on the three-dimensional surface. Both Figure
6.7 (a) and (b) were generated using the same initial triangle mesh as well as the same cleanup and
smoothing procedures. Despite using an advancing front scheme, Lee's algorithm, shown in Figure
6.7 (b), has difficulty maintaining well-aligned rows of elements, introducing many irregular internal
nodes.
Figure 6.7 Results of Q-Morph Compared with Lee’s (1994) Advancing Front Indirect
Method on Toroidal Surface- (Courtesy of Owen et al.)
Figure 6.9 further illustrates the ability of the Q-Morph algorithm to generate well-aligned rows of
elements parallel to a complex domain boundary, while still maintaining the required element size
transitions. Figure 6.8 demonstrates the use of Q-Morph with a planar surface requiring a high
degree of transition. Figure 6.8 (a) shows the partially completed quad mesh with two layers of
quads placed. Figure 6.8 (b) shows the same area after final cleanup and smoothing. In order to
maintain a specified nodal density near the top of the area, a sizing function (Owen,1997) was used
during the triangle meshing process. The algorithm’s ability to maintain the desired mesh density
while still enforcing well-aligned rows of elements transitioning quickly to larger size elements is
demonstrated in this example. For further and complete analysis, please consult the work by [Owen
et al.]159.
159 Steven J. Owen, Matthew Staten, Scott A. Canann and Sunil Saigal, "Advancing Front Quadrilateral Meshing
Using Triangle Transformations", Conference Paper, January 1998.
Figure 6.9 Comparison of Q-Morph with Lee’s Algorithm Illustrating Element Boundary
Alignment - (Courtesy of Owen et al.)
Figure 6.8 Large Transition Mesh for CFD Application - (Courtesy of Owen et al.)
6.1.2.6 Conclusion
The Q-Morph algorithm is an indirect quadrilateral meshing algorithm that utilizes an advancing
front approach to transform triangles into quadrilaterals. It generates an all-quadrilateral mesh,
provided the number of intervals on the boundary is even. The resulting mesh has few irregular
internal nodes and produces elements whose contours, in general, follow the boundary of the
domain. Overall element quality is excellent. The Q-Morph algorithm borrows many of its techniques
from the paving method (Blacker, 1991; Cass, 1996) but adapts them for use as an indirect method,
operating on an existing set of triangles. In so doing, it is able to improve upon the paving technique
by resolving some of its inherent difficulties. The intersection problem, common to most direct
methods of advancing front meshing, is eliminated by relying on the topology of the initial triangle
mesh to close opposing fronts. Improvements also include a facility for handling individual element
placement through the use of states for classifying front edges. The handling of transitions in
element sizes has also been addressed through the use of sizing information provided by the initial
triangle mesh and the definition of specific transformations that enable improved mesh transitions.
Additionally, the initial triangle mesh provides information that reduces the cost of direct evaluations
on three-dimensional surface geometry.
160 Delaunay, Boris (1934). "Sur la sphère vide". Bulletin de l'Académie des Sciences de l'URSS, Classe des
sciences mathématiques et naturelles. 6: 793–800.
161 From Wikipedia, the free encyclopedia.
162 Ashwin Nanjappa, “Delaunay Triangulation In R3 on The Gpu”, A Thesis Submitted For The Degree of Doctor
• It belongs to only one tetrahedron and therefore belongs to the boundary of the convex hull;
• It belongs to two tetrahedra abcd and abce, and e lies on the exterior of the circumsphere of
abcd.
As shown in Figure 6.11, when A, B and C are sorted in counterclockwise order, this determinant
is positive if and only if D lies inside the circumcircle. The majority of Delaunay based methods exploit
an incremental algorithm that starts with an initial triangulation of just a few points. The complete
triangulation is generated by introducing points and locally reconstructing the triangulation after
each point insertion. A particularly attractive feature of this approach is the opportunity to place new
points at specified locations with the aim of retaining, or possibly improving, the quality of the
mesh163. The main difficulty is the need to ensure surface integrity. Most methods allow the boundary
points to be inserted into the volume triangulation unchecked, reestablishing the surface edges and
faces by a series of edge/face swaps and the occasional introduction of an extra point. The left hand
side of Figure 6.12 illustrates a simplified complex formed by two tetrahedra which share a common
face. If this face is removed and an edge is inserted connecting the vertices A and B, one obtains three
tetrahedra (shown on the right hand side) which occupy the same region of space as the two original
tetrahedra. This so-called "2 to 3" swap can often be used to establish a boundary edge; the reverse
operation can similarly be applied to establish a boundary face. Not all boundary edges and faces can
be established by this one operation, but other more complicated swapping operations are possible.
When the boundary triangulation has been established within the initial volume mesh, additional
points are then inserted into the triangulation in order to create a volume mesh of well-shaped
tetrahedra (see Figure 6.11 and Figure 6.13). A detailed description of this process is given in the
books by [George and Borouchaki] and by [Frey and George]164.
Figure 6.12 Two-Three Tetrahedra Swap
Figure 6.11 Robust and Fast Way to Detect if Point D Lies in the Circumcircle of A, B, C
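The in-circle test mentioned above can be sketched directly as the sign of a 3x3 determinant (Python; this plain floating-point version is only illustrative, and mesh generators normally rely on exact or adaptive-precision predicates):

import numpy as np

def in_circumcircle(a, b, c, d):
    """Return True if point d lies inside the circumcircle of triangle a, b, c.

    a, b, c must be given in counterclockwise order; each point is (x, y).
    The predicate is the sign of the standard 3x3 in-circle determinant.
    Plain floating point is used here, so results near the circle are unreliable.
    """
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0]) ** 2 + (a[1] - d[1]) ** 2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0]) ** 2 + (b[1] - d[1]) ** 2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2],
    ])
    return np.linalg.det(m) > 0.0

# The origin lies inside the circumcircle of this counterclockwise triangle
print(in_circumcircle((1, 0), (0, 1), (-1, -1), (0, 0)))   # True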
6.2.3 Advantages
The important advantage of triangulation techniques is the higher degree of automation that is
achieved in the meshing process. It can be shown, for example, that a Delaunay mesh can be
generated to conform to any prescribed boundary in 2D165. The situation in 3D is much more
complicated and no similar mathematical guarantee exists. The method has, nevertheless, been
brought to a high level of automation and current tetrahedral mesh generators will reliably create
good quality isotropic meshes if they are provided with a good quality surface triangulation166-167.
Delaunay triangulation is a concept that extends back well before the emergence of mesh
generation168. Together with its geometric dual, the Voronoi diagram, it has proved to be a fertile
construct whose applications extend from cartography to crystallography. In the seventies it
attracted the attention of computer scientists and quickly became an important topic within the then
emerging discipline that is now known as computational geometry169. In the early nineties computer
scientists rediscovered mesh generation as an application of Delaunay triangulation although
computer graphics and animation was, and still remains, the main justification for their research into
triangulation problems and Delaunay triangulation.
Figure 6.13 Delaunay Triangulation (white) and Voronoi Diagram (blue) – (Courtesy of [Labbe])
168 Delaunay B. Sur la sphère vide. Izvestia Akademia Nauk SSSR, VII Seria, Otdelenie Matematicheskii i
Estestvennykh Nauk 1934; 7:793–800.
169 Preparata FP, Shamos MI. Computational geometry. Berlin: Springer; 1985.
170 Chew, L. P. Constrained Delaunay triangulations. (1989).
171 Ruppert, J. A new and simple algorithm for quality 2-dimensional mesh generation. SODA (1993).
172 Bowyer, A. Computing dirichlet tessellations. The Computer Journal 24, 2 (1981).
173 Watson, D. F. Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes. The
In the context of anisotropy, the generation of points was also originally done independently from
the construction of the triangulation, using for example anisotropic quadtrees. [Mavriplis]174-175 first
considered the idea of stretched Delaunay methods, using nodes generated from an anisotropic
advancing front technique; the connectivity is set by first constructing a large isotropic mesh and
then inserting vertices with the Bowyer-Watson algorithm adapted to the anisotropic setting. The
stretching of the space is obtained by computing gradients of the solution. Good results were
achieved, but the swapping techniques employed do not extend nicely to higher-dimensional
settings. [Borouchaki et al.]176 formalized the space-stretching approach of [Mavriplis] through
the use of Riemannian metric tensors and introduced the anisotropic Delaunay kernel, their
anisotropic version of the Bowyer-Watson algorithm. Along with this new insertion
algorithm, they introduced a Delaunay refinement algorithm based on edge swapping, merging and
splitting techniques to generate meshes whose edge lengths are close to 1 in the metric at each of
their endpoints. Many developments have sprouted from this approach: 3D mesh generation,
periodic anisotropic mesh generation, and metric-orthogonal mesh generation.
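For reference, the isotropic Bowyer-Watson insertion step that these anisotropic kernels generalize can be sketched as follows (Python; the list-based data structures are illustrative, the triangulation is assumed to already cover the new point, for example via an enclosing super-triangle, and in_circumcircle is the floating-point predicate sketched earlier): the triangles whose circumcircles contain the new point form a cavity, the cavity is removed, and its boundary edges are reconnected to the new point.

def bowyer_watson_insert(points, triangles, p_new, in_circumcircle):
    """One Bowyer-Watson insertion into an existing 2D Delaunay triangulation.

    points          : list of (x, y); p_new is appended to it
    triangles       : list of (i, j, k) index triples, counterclockwise
    in_circumcircle : predicate in_circumcircle(a, b, c, d) -> bool
    Returns the updated triangle list.
    """
    points.append(p_new)
    ip = len(points) - 1
    # 1. cavity: all triangles whose circumcircle contains the new point
    bad = [t for t in triangles
           if in_circumcircle(points[t[0]], points[t[1]], points[t[2]], p_new)]
    # 2. boundary of the cavity: edges that belong to exactly one bad triangle
    edge_count = {}
    for i, j, k in bad:
        for e in ((i, j), (j, k), (k, i)):
            key = tuple(sorted(e))
            edge_count[key] = edge_count.get(key, [0, e])
            edge_count[key][0] += 1
    boundary = [e for cnt, e in edge_count.values() if cnt == 1]
    # 3. remove the cavity and re-triangulate it by connecting its edges to the point
    triangles = [t for t in triangles if t not in bad]
    triangles += [(a, b, ip) for a, b in boundary]
    return triangles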
While these algorithms produce good results and have seen much use in the context of computational
fluid dynamics, theoretical results for them are limited and there are no guarantees on either the
termination or the robustness of these algorithms, nor on the quality of the elements produced
by these techniques. A theoretically sound approach to anisotropic Delaunay triangulations was
proposed by [Boissonnat et al.]177, who introduced the framework of locally uniform anisotropic
meshes. In their algorithm, the star of each vertex v is composed of simplices that are Delaunay for
the metric at v. Each star is built independently and the stars are stitched together in the hope of
creating an anisotropic mesh. The star structure was first introduced by [Shewchuk]178 to handle
moving vertices in finite element meshes and considering stretched stars was first proposed by
[Schoen]. Two stretched stars may be combinatorially incompatible, a configuration called an
inconsistency. [Boissonnat et al.] proved that inconsistencies can be resolved by inserting Steiner
points, yielding an anisotropic triangulation. The algorithm works in any dimension, can handle
complex geometries and provides guarantees on the quality of the simplices of the triangulation.
6.2.5 Voronoi Diagrams
The well-known duality between the Euclidean Voronoi diagram and its associated Delaunay
triangulation has inspired authors to compute anisotropic Voronoi diagrams, with the hope of
obtaining a dual anisotropic triangulation. The approaches of [Labelle and Shewchuk]179 and [Du and
Wang]180 aim at approximating the geodesic distance between a seed and a point of the domain by
considering that the metric is constant and equal to the metric at the seed (in the case of [Labelle and
Shewchuk]181) or at the point (in the case of [Du and Wang ]). Contrary to the isotropic setting, the
dual of an anisotropic Voronoi diagram is not necessarily a triangulation and inverted elements can
174 Mavriplis, D. J. Adaptive mesh generation for viscous flows using triangulation. Journal of computational
Physics 90, 2 (1990), 271–291.
175 Mavriplis, D. J. Unstructured mesh generation and adaptivity. Tech. rep., DTIC Document, 1995.
176 Borouchaki, H., George, P. L., Hecht, F., Laug, P., and Saltel, E. Delaunay mesh generation governed by metric
specifications. part I algorithms. Finite Elem. Anal. Des. 25, 1-2 (1997), 61–83.
177 Boissonnat, J.-D., Wormser, C., and Yvinec, M. Locally uniform anisotropic meshing. Proceedings of the
of the 21st annual symposium on Computational geometry (New York, NY, USA, 2005.
179 Labelle, F., and Shewchuk, J. R. Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh
generation. Proceedings of the 19th annual symposium on Computational geometry (New York, NY, USA, 2003,
180 Du, Q., and Wang, D. Anisotropic centroidal Voronoi tessellations and their applications. SIAM Journal on
Scientific Computing.
181 Labelle, F., and Shewchuk, J. R. Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh
generation. Proceedings of the 19th annual symposium on Computational geometry (New York, NY, USA, 2003).
be present in the dual triangulation. The algorithms were initially introduced for two-dimensional
(Labelle and Shewchuk) and surface (Du and Wang) domains and have since then been studied and
extended by various authors. The approach of Labelle and Shewchuk was shown to be theoretically
sound in 2D, but the approach of the proof does not extend to higher dimensions. This result was
extended to surfaces by [Cheng et al.]182 by locally approximating the surface with a plane and then
using a density argument similar to the proof of [Canas and Gortler]. Centroidal Voronoi tessellations,
which are Voronoi diagrams for which the seeds are the centers of mass of their associated Voronoi
cells, are known to create elements of good quality. The well-known Lloyd algorithm iteratively moves
the seeds to the center of mass of their respective cells and recomputes the Voronoi diagram of this
new seed set. This algorithm was modified to be used with the anisotropic Voronoi diagram of Du and
Wang, but the process is computationally expensive.
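A minimal sketch of the Lloyd iteration is given below (Python; instead of computing the exact Voronoi diagram, each cell centroid is approximated by the mean of densely sampled domain points assigned to their nearest seed, which is an assumption of the sketch rather than the algorithm as usually implemented):

import numpy as np

def lloyd_iteration(seeds, samples, n_iter=10):
    """Approximate Lloyd relaxation toward a centroidal Voronoi tessellation.

    seeds   : array of seed positions, shape (k, dim)
    samples : dense sample points covering the domain, shape (m, dim); each
              Voronoi cell centroid is approximated by the mean of the samples
              assigned to that seed.
    """
    seeds = np.asarray(seeds, float).copy()
    for _ in range(n_iter):
        # assign every sample to its nearest seed (a discrete Voronoi diagram)
        d2 = ((samples[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        # move every seed to the centroid of its cell (skip empty cells)
        for k in range(len(seeds)):
            cell = samples[owner == k]
            if len(cell):
                seeds[k] = cell.mean(axis=0)
    return seeds

# Example: relax 10 random seeds over a sampled unit square
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)), -1).reshape(-1, 2)
print(lloyd_iteration(rng.random((10, 2)), grid))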
6.2.6 Restricted Delaunay Triangulation
The Delaunay and Voronoi structures presented so far are built from (almost) arbitrary point sets
living in Rn. It is possible to employ these structures to approximate bounded domains. The
restriction of a Delaunay complex to a domain is the subcomplex (the restricted Delaunay complex)
composed of the simplices whose dual Voronoi faces intersect the domain. Restricted Delaunay
triangulations were introduced by [Chew] and make it possible to accurately capture complex
geometric objects. For example, it can be shown that, under the condition of good sampling of a
surface, the restricted Delaunay triangulation and the domain are homeomorphic. Thanks to these
good properties, restricted Delaunay triangulations have often been used to create provably correct
refinement algorithms in the case of surfaces. It was, however, proven by [Boissonnat, Guibas and
Oudot]183 that this does not extend to higher-dimensional settings. Nevertheless, restricted Delaunay
triangulations will be used consistently in the refinement algorithms considered here. Figure 6.14
shows the 2D Delaunay triangulation of a set of vertices (black) restricted to a curve (blue). Voronoi
edges are represented in teal, and in pink if they intersect the curve. The Voronoi vertices are marked
with orange circles. Restricted Delaunay edges are drawn in yellow, and restricted Delaunay triangles
are drawn in green.
Figure 6.14 2D Delaunay Triangulation of a Set of Vertices (Black) Restricted to a Curve (Blue)
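A small sketch of the restricted Delaunay idea in 2D is given below (Python, using scipy.spatial.Delaunay; the implicit-curve test and the neglect of unbounded hull Voronoi edges are simplifications of the sketch): a Delaunay edge is retained when the finite Voronoi edge dual to it, i.e. the segment joining the circumcenters of its two adjacent triangles, crosses the curve.

import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    """Circumcenter of a 2D triangle given as three numpy points."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    return np.array([ux, uy])

def restricted_delaunay_edges(points, f):
    """Delaunay edges whose dual (finite) Voronoi edge crosses the curve f = 0.

    points : (n, 2) array of vertices; f : implicit function of the curve.
    Hull edges, whose dual Voronoi edge is an infinite ray, are ignored here.
    """
    tri = Delaunay(points)
    centers = [circumcenter(*points[s]) for s in tri.simplices]
    edges = []
    for t, nbrs in enumerate(tri.neighbors):
        for local, n in enumerate(nbrs):
            if n > t:  # visit each interior edge once
                # the shared edge is the simplex minus the vertex opposite it
                shared = [v for k, v in enumerate(tri.simplices[t]) if k != local]
                if f(*centers[t]) * f(*centers[n]) < 0.0:
                    edges.append(tuple(shared))
    return edges

# Example: vertices scattered around a circle of radius 1
rng = np.random.default_rng(1)
pts = rng.uniform(-1.5, 1.5, (200, 2))
print(len(restricted_delaunay_edges(pts, lambda x, y: x * x + y * y - 1.0)))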
182 Cheng, S.-W., Dey, T. K., Ramos, E. A., and Wenger, R. Anisotropic surface meshing. Proceedings of the
seventeenth annual ACM-SIAM symposium on Discrete algorithm (2006).
183 Boissonnat, J.-D., Guibas, L. J., and Oudot, S. Y. Manifold reconstruction in arbitrary dimensions using witness
184 W. Oaks, S. Paoletti, “Polyhedral Mesh Generation”, adapco Ltd, 60 Broadhollow Road, Melville 11747 New
York, USA.
185 See Previous.
186 Hrvoje Jasak, Željko Tuković, "Automatic Mesh Motion for the Unstructured Finite Volume Method", ISSN
[Figure: (a) typical polyhedral cells; (b) cell decomposition; (c) cell and face decomposition]
189 Paul-Louis George and Houman Borouchaki, “Delaunay Triangulation and Meshing - Application to Finite
In 3D space, an equivalent mesh generation method would require a tetrahedral primal mesh that
complies with the Delaunay criterion. Delaunay partitioning is known to maximize the minimum
angle of all formed simplices, which leads to well-conditioned tetrahedra. However, in order to obtain
a valid dual mesh, a far stricter criterion needs to be fulfilled: that of well-centered tetrahedra, meaning
that the circumcenter of a primal cell needs to be located within its volume190. This is not always
possible, as tetrahedra at the boundaries may be very flat, having their circumcenters outside the
model's domain, while the Delaunay criterion still remains fulfilled. Situations like this are especially
encountered at sharp concavities of the geometric model, and several suggestions have been made in
order to overcome this issue. Other possibilities include non-Delaunay tetrahedral meshes, hexahedral
or even mixed meshes, which are discussed further. Finally, the advantage of indirect mesh generation
lies in the fact that efficient algorithms can be implemented in order to obtain topologically involved
dual meshes based on primal meshes with simple topology. Furthermore, the primal meshes
themselves can be created following equally efficient and well-studied algorithms. This approach
leads to an effective two-step mesh generation, rather than an expensive, direct one.
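The well-centeredness criterion can be checked directly, as sketched below (Python; the barycentric-coordinate test and the sample tetrahedra are illustrative assumptions): the circumcenter is computed from the vertices and the tetrahedron is accepted only if the circumcenter's barycentric coordinates are all positive.

import numpy as np

def tet_circumcenter(v):
    """Circumcenter of a tetrahedron with vertices v of shape (4, 3)."""
    A = 2.0 * (v[1:] - v[0])                               # 3x3 linear system
    b = (v[1:] ** 2).sum(axis=1) - (v[0] ** 2).sum()
    return np.linalg.solve(A, b)

def is_well_centered(v, tol=1e-12):
    """True if the circumcenter lies strictly inside the tetrahedron.

    The test expresses the circumcenter in barycentric coordinates; the
    tetrahedron is well centered when all four coordinates are positive.
    """
    c = tet_circumcenter(v)
    T = np.column_stack([v[1] - v[0], v[2] - v[0], v[3] - v[0]])
    lam = np.linalg.solve(T, c - v[0])                     # barycentric coords 1..3
    bary = np.concatenate([[1.0 - lam.sum()], lam])
    return bool(np.all(bary > tol))

# A regular-ish tetrahedron is well centered; a very flat ("sliver") one is not
good = np.array([[0, 0, 0], [1, 0, 0], [0.5, 0.9, 0], [0.5, 0.3, 0.9]], float)
flat = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, 0.5, 0.01]], float)
print(is_well_centered(good), is_well_centered(flat))   # True False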
6.4.3 Methodology
Given a triangular mesh in 2D, a polygonal mesh is formed by following the principle that a dual cell
will be formed around every primal vertex. In the interior of the domain, this one-to-one
correspondence between primal and dual entities extends to other entity types as well, with one dual
edge per primal edge and a dual vertex for every primal face. However, the generation of polygonal
faces on the boundary demands additional dual edges and vertices at specific locations of the
boundary that denote the classification of primal entities as significant. A slightly modified version of
the generic polygon mesh generation just described is used to obtain a variation known as the median
polygon mesh. This method differentiates itself by considering every existing primal edge as
significant, thus creating dual vertices at the midpoints of primal edges lying in the interior as well.
These dual vertices become, consequently, vertices of
190Rao V. Garimella, Jibum Kim, and Markus Berndt, “Polyhedral mesh generation and optimization for non-
manifold domains”, Proceedings of the 22nd International Meshing Roundtable, pages 313-330. Springer
International Publishing, 2014.
Figure 6.19 (a) Cut of initial tetrahedral mesh of a simple 2-material model (b) Cut of initial polyhedral
mesh showing valid (green) and invalid (red) elements (c) Cut of untangled and optimized polyhedral
mesh (d) Full polyhedral mesh
In another paper, by [Lee]192, three methods are investigated to remove the concavity at the boundary
191 Rao V. Garimella, Jibum Kim, and Markus Berndt, “Polyhedral Mesh Generation and Optimization for Non-
manifold Domains”, Los Alamos National Laboratory, Los Alamos, NM, USA, 2013.
192 Sang Yong Lee, “Polyhedral Mesh Generation and A Treatise on Concave Geometrical Edges”, 24th International
edges and/or vertices during polyhedral mesh generation by a dual mesh method. The non-manifold
elements insertion method is the first method examined. Concavity can be removed by inserting non-
manifold surfaces along the concave edges and using them as an internal boundary before applying
Delaunay mesh generation. Conical cell decomposition/bisection is the second method examined.
Any concave polyhedral cell is first decomposed into polygonal conical cells, with the polygonal
primal-edge cut-face as the base and the dual vertex on the boundary as the apex; the concave
polygonal conical cell is then bisected along the concave edge. Finally, the cut-along-concave-edge
method is examined: every concave polyhedral cell is cut along its concave edges, and if a cut cell
is still concave, it is cut once more. The first method is very promising, but not many mesh
generators can handle non-manifold surfaces, especially for complex geometries. The second
method is applicable to any concave cell with a rather simple procedure, but the decomposed cells
are too small compared to the neighboring non-decomposed cells. Therefore, it may cause some problems
during the numerical analysis application. The cut-along-concave-edge method is the most viable
method at this time. In this paper, discussions are presented with the boundary conforming Delaunay
primal mesh to reduce the complexity that may be expected at the boundary. Polygonal prism mesh
generation at the viscous layer is also investigated. The polyhedral mesh generated is shown in
Figure 6.19 (d). It is shown that all of the concave polyhedral cells are cut into convex conical cells.
It is noted that this method is very straightforward and can be applicable to any shape of concave
polyhedral cell.
Figure 6.20 Boundary Layer Prisms Generated on a Cascade of a 2D Triangulation and Dual Polyhedron –
(a) Triangular primal mesh (black) and 1st generation polygonal dual mesh (red); (b) 1st generation dual
mesh (red) and 2nd generation triangular dominant prime mesh (black); (c) 2nd generation prime mesh
(black) and 3rd generation dual mesh (red)
193 Mavriplis, DJ. "Adaptive mesh generation for viscous flows using Delaunay triangulation", Journal of
Computational Physics, 1991.
194 Mavriplis, DJ. "Unstructured and adaptive mesh generation for high-Reynolds number viscous
flows", Proceedings of the International Conference on Numerical Grid Generation in Computational Fluid Dynamics
[Vallet et al.]195, [Castro-Diaz et al. 1995]. By defining a mapped space based on the desired amount
and direction of stretching, an isotropic Delaunay triangulation can be generated in this mapped
space that, when mapped back to physical space, provides the desired stretched triangulation.
Difficulties with such methods involve defining the stretching transformations and determining
suitable point distributions for avoiding obtuse triangular elements.
An alternative to the above approaches is to generate a locally structured or semi structured mesh in
the regions where high stretching is required. One approach [Nakahashi 1987, Ward & Kallinderis
1993] attempts to preserve the mesh structure in the direction normal to the boundary up to a
specified distance away from the boundary, after which fully unstructured isotropic meshing
techniques are employed. Special care must be taken in this case to avoid mesh cross-overs in regions
of concave curvature and to ensure a smooth transition between the structured and unstructured
region of the mesh. Another strategy [Löhner]196; [Pirzadeh]197; [Connell & Braaten 1995] consists
of generating a semi structured mesh, where the “stack” of mesh cells emanating from each individual
boundary face may terminate independently from those at other boundary faces, as shown.
Termination of these “advancing-layers” [Pirzadeh]198 is triggered when the local cell aspect-ratio
approaches unity, or when cross-over with other cells is detected, such as in concave corners. The
remaining region is then gridded with a conventional isotropic unstructured mesh generation
approach. The resulting structured or semi structured meshes can either be conserved as local
structured entities of quadrilaterals in two dimensions and prisms in three dimensions (since the
surface grid is generally assumed to be triangular), or the different element types may be divided
into triangles or tetrahedra in two or three dimensions, respectively.
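A one-dimensional caricature of the advancing-layers termination criterion is sketched below (Python; the geometric growth law, the wall spacing, and the isotropic background size are illustrative assumptions): the layer stack from each boundary face grows until the next layer would be as thick as the local isotropic spacing, i.e. the cell aspect ratio approaches unity.

def advancing_layer_heights(first_height, growth, isotropic_size, max_layers=50):
    """Heights of the semi-structured layers grown from one boundary face.

    Growth is geometric; the stack terminates when the next layer would be as
    thick as the local isotropic background spacing (aspect ratio ~ 1), after
    which the remaining region is left to the isotropic mesh generator.
    """
    heights, h = [], first_height
    for _ in range(max_layers):
        if h >= isotropic_size:          # cell aspect ratio has reached ~1
            break
        heights.append(h)
        h *= growth
    return heights

# e.g. first layer 1e-5 (a viscous wall spacing), 20% growth, background size 0.01
layers = advancing_layer_heights(1e-5, 1.2, 1e-2)
print(len(layers), sum(layers))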
and Related Fields, 3rd Barcelona, Spain, ed. AS Arcilla, J Hauser, PR Eisman, JF Thompson, pp. 79–92. New York:
North-Holland, 1991.
195 Vallet MG, Hecht F, Mantel B., “Anisotropic control of mesh generation based upon a Voronoi type method”.
196 Löhner R., "Matching semi-structured and unstructured grids for Navier-Stokes calculations", AIAA 1993.
197 Pirzadeh S.,”Viscous unstructured three dimensional grids by the advancing-layers method”, AIAA, 1994.
198 Pirzadeh S. 1994, AIAA J. 32(8):1735–37.
199 Philippe Young, “ Meshing from Image Data with Simpleware“, Synopsys, Inc. 2015.
Figure 6.22 Comparison of traditional voxel mesh with Simpleware mesh preserving segmented
domains without decreasing data resolution
200Hadass R. Jessel, Sagi Chen, Shmuel Osovski, Sol Efroni, Daniel Rittel & Ido Bachelet, “Design principles of
biologically fabricated avian nests”, Scientific Reports, March 2019.
6.7.1.1 Methods
6.7.1.1.1 Nests
Five nests were purchased from commercial bird nest farms in Selangor, Malaysia. The nests were
harvested at the farms, shipped directly to our laboratory, and scanned immediately on receipt. The
vendor confirmed that the provided nests had been cleaned and processed without bleaching agents
and were untreated with coloring or artificial preservatives. The nests were stored in separate closed
containers at a relative humidity of 80% and a temperature of 25 °C throughout the research.
6.7.1.1.2 CT Scans
μCT scans were performed on a SkyScan 1176 high-resolution μCT (SkyScan, Aartselaar, Belgium).
After adjusting the appropriate parameters for scanning, each nest was positioned on the specimen
stage and scanned with an isotropic resolution of 34.04μm, rotational step of 0.700 degrees, and a 41
ms exposure time (tube voltage 40 kV, tube current 600 μA with no filter).
6.7.1.1.3 Image Processing
μCT datasets were imported into Simpleware ScanIP M-2017.06-SP1 (Synopsys, Mountain View,
USA), an image processing software used to visualize and segment regions of interest (ROIs) from
volumetric 3D data. This software imports a stack of images from μCT slices in a wide variety of
software formats (in this case, approximately 2500 bitmap files), allowing steps of visualization and
assisted segmentation based on image density thresholding of different grayscale intensities. The
following tools and workflow were applied to all five nests. The image sequence of each μCT-scanned
nest was imported with a pixel spacing of 34.04 μm in x, y and z and a background type of 8-bit
unsigned integer. The image sequence was resampled by linear interpolation to a pixel size of
68.1 μm to downsample the image data. The segmentation was then used to generate the binary
volumes, called masks, which define how the objects fill the space.
Several segmentation tools were used to create the masks from the background image data, and these
were modified until they showed a satisfactory mask. There are some artifacts and noise in μCT
data, which can be corrected by filtering when the images are reconstructed. A threshold tool was
used for segmenting the nest models based on grayscale intensities. The threshold values of
two-dimensional regions on the imported stack of images were adjusted in order to select only the
nest and exclude background noise. Grayscale boundaries were set to a lower value of 40 and an upper
value of 255. A flood-fill tool was used to remove non-connected artifacts from the mask; this
connectivity-based algorithm was applied to the active mask.
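The thresholding and flood-fill cleanup described above can be sketched generically with scipy.ndimage (Python; this is not the ScanIP workflow itself, and keeping only the largest connected component is an assumption standing in for the interactive flood-fill step):

import numpy as np
from scipy import ndimage

def segment_largest_component(volume, lower=40, upper=255):
    """Threshold a grayscale volume and keep only its largest connected region.

    volume : 3D array of grayscale intensities (e.g. a stack of uCT slices)
    Returns a boolean mask approximating a thresholded, flood-filled segmentation.
    """
    mask = (volume >= lower) & (volume <= upper)          # grayscale thresholding
    labels, n = ndimage.label(mask)                       # connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = 1 + int(np.argmax(sizes))                      # label of the largest region
    return labels == keep

# Example on a small synthetic volume with one large and one small bright blob
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[4:20, 4:20, 4:20] = 120
vol[26:29, 26:29, 26:29] = 200
print(segment_largest_component(vol).sum())   # voxels of the large blob only: 4096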
A recursive Gaussian filter was used with a sigma value of 1 in each direction to reduce image noise
and the level of detail. Closed pores with fewer than 125 voxels were selected and added to the mask
to reduce computational time and to improve the quality of the generated elements. ScanIP was
used for the 3D volumetric visualization, analysis, and measurement. The morphometric parameters
of the whole nest were calculated by the software. The following parameters were used: mass
density, the volume of the nest, nest surface area, and pore analysis. Average mass density was
defined as the ratio of nest mass (measured with a scale) to nest volume measured by scanIP 3D mask
analysis. Measuring the distribution of the mass density, surface area and closed pores along different
axes of the nest was done by generating a segmented mask for the nest and a separate segmented
mask for the closed pores. A Slice-by-Slice script was written for slicing the masks in yz, xz and xy
coordinate planes and finally the data of the pore and the nest masks were analyzed in every slice. A
pore multi-label mask was generated from the segmented mask of the closed pores.
Generating the multi-label mask was done to interactively visualize and analyze the pore mask that
contained several regions (scattered pores in between the strands of saliva). The pore multi-label
mask was created by labeling disconnected regions within the pore mask, where each distinct region
was given a separate color. Further details are available from [Jessel et al.]201.
201Hadass R. Jessel, Sagi Chen, Shmuel Osovski, Sol Efroni, Daniel Rittel & Ido Bachelet, “Design principles of
biologically fabricated avian nests”, Scientific Reports, March 2019.
7.1 Case Study - Anisotropic Mesh Generation via Discretized Riemannian Delaunay
Triangulations
Due to its wide array of practical applications, anisotropic mesh generation has received considerable
attention and several classes of methods have been proposed. In this study, the generation of
anisotropic meshes using the concepts of Delaunay triangulations and Voronoi diagrams has been
investigated by [Labbe]203. First, consider the framework of locally uniform anisotropic meshes
introduced by [Boissonnat, et al.]204. Despite known theoretical guarantees, the practicality of this
approach has hardly been studied; an exhaustive empirical study is presented and reveals the
strengths, but also the overall impracticality, of the method. The ideal shape of a simplex has so far
been described as the regular simplex, but this is not always the case. We follow closely the
development by [Labbe]205-206. The approaches considered fall into the following categories:
1 Algorithms based on the concepts of the Delaunay triangulation and the Voronoi
diagram,
2 Algorithms based on an embedding of the input domain to simplify the problem,
3 Algorithms based on the optimization of particles.
The different approaches that we consider here are all based upon extending the notions of Voronoi
diagrams and Delaunay triangulations to the anisotropic setting. We hope to benefit from the
known results and theoretical soundness of the isotropic Delaunay triangulation and Voronoi
diagram to generate anisotropic meshes with provable and practical meshing techniques. As all our
methods are based upon the same structures, we dedicate a chapter to introducing the notions that
will be used throughout this thesis. Our main chapters follow a logical progression, with each method
202 Mael Rouxel-Labbe, “Anisotropic mesh generation”, Université Côte d’Azur, 2016.
203 Mael Rouxel-Labbe, “Anisotropic mesh generation”, Université Côte d’Azur, 2016.
204 Boissonnat, J.-D., Chazal, F., and Yvinec, M. Geometry and Topology Inference. in preparation.
205 Mael Rouxel-Labbe, “Anisotropic mesh generation”, Université Côte d’Azur, 2016.
206 M. Rouxel-Labbé, M. Wintraecken, J.-D. Boissonnat, "Discretized Riemannian Delaunay triangulations",
taking more metric information into account to determine the connectivity and placement of points
than the previous ones.
We begin with a thorough practical investigation of the framework of locally uniform anisotropic
meshes, a theoretically sound meshing technique proposed by [Boissonnat et al.]207 that is based on
the idea of constructing at each point a triangulation that is well adapted to the local metric. The
theoretical aspect of their approach has been extensively described, but its practicality is
comparatively less well known. We detail our implementation, which is both more robust and faster
than the one previously presented in the short experimental investigation of the algorithm for surfaces,
investigate the role of the numerous parameters, and give some results. Limitations of the approach
are then exposed, along with our attempts to address those. In the Euclidean setting, the Delaunay
triangulation of a point set can be constructed by first generating the Voronoi diagram of the point
set and then computing the dual of this diagram. Anisotropic Voronoi diagrams have been considered
to build anisotropic triangulations; however, the dual of an anisotropic Voronoi diagram is not
necessarily a valid triangulation and elements can be inverted. Different distances are possible to
create such anisotropic Voronoi diagrams.
Along with the introduction of their anisotropic distance, [Labelle and Shewchuk]208 presented a
refinement algorithm that generates a point set for which their anisotropic Voronoi diagram has a
valid dual triangulation. However, their method is limited to the setting of planar domains. We give
requirements on point sets such that the dual of the anisotropic Voronoi diagram is a nice
triangulation and propose a refinement algorithm to generate such point sets. Our proof links
anisotropic Voronoi diagrams built using the distance of [Labelle and Shewchuk]209 with the notions of
cosphericity (used in locally uniform anisotropic meshes) and of protection of a point set.
Table 7.1 Nomenclature
Ω          Domain
P          Point set
G          Metric
F          Square root of a metric
λi, vi     Eigenvalues and eigenvectors of a metric
φ(G1,G2)   Distortion between two metrics G1 and G2
g0         Uniform metric field
g          Arbitrary metric field
dG         Distance with respect to the metric G
‖.‖G       Norm with respect to the metric G
dE         Distance with respect to the Euclidean metric E
dg         Geodesic distance with respect to the metric field g
Vord(P)    Voronoi diagram of P using the distance d
Del(P)     Abstract Delaunay complex of the point set P
DelG(P)    Delaunay complex of the point set P with respect to G
Delg0(P)   Delaunay complex of the point set P with respect to g0 (uniform metric field)
Sp         Star of p
Svp        Restricted volume star of p
Ssp        Restricted surface star of p
207 Boissonnat, J.-D., Dyer, R., and Ghosh, A. Delaunay stability via perturbations. Int. J. Com. Geo.& App., (2014).
208 Labelle, F., and Shewchuk, J. R. Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh
generation. Proceedings of the nineteenth annual symposium on Computational geometry (New York, NY, USA,
2003), ACM Press, pp. 191–200.
209 see Previous.
The second
part of this chapter introduces a refinement algorithm based on the combination of the alternative
point of view of the anisotropic Voronoi diagram as the restriction of a high-dimensional power
diagram to a fixed paraboloid manifold and the tangential Delaunay complex, a structure used in
manifold reconstruction that is well-adapted to the high-dimensional setting. We detail our
implementation, study its theoretical grounds and investigate the practicality of the algorithm.
Anisotropic Voronoi diagrams studied by previous authors compute and compare distances using a
fixed metric, justifying this approximation by invoking the computational and time complexity of
computing geodesics in a domain endowed with a metric field. For reference, Table 7.1 lists the
symbols used.
210 Preparata, F. P., and Shamos, M. Computational geometry: an introduction. Springer Science & Business, 2012.
211 Labelle, F., and Shewchuk, J. R. Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh
generation. Proceedings of the nineteenth annual symposium on Computational geometry, NY, 2003.
212 Du, Q., and Wang, D. Anisotropic centroid Voronoi tessellations and their applications. SIAM Journal on
Figure 7.1 Representation of a 3D Metric with Eigenvalues λ1, λ2 and λ3 as an Ellipsoid – (Courtesy of
[Labbe])
213 Borouchaki, H., George, P. L., Hecht, F., Laug, P., and Saltel, E. Delaunay mesh generation governed by metric
specifications. part I algorithms. Finite Elem. Anal. Des. 25, 1-2 (1997), 61–83.
214 Diaz, M. C., Hecht, F., Mohammadi, B., and Pironneau, O. Anisotropic unstructured mesh adaptation for flows
7.1.1.3 Distortion
The notions of metrics and metric fields are introduced to convey the stretching of spaces. To create
a solid theoretical framework, it is required to be able to express how differently two metrics see
distances and geometrical objects. For this purpose, [Labelle and Shewchuk] introduced the concept
of distortion between two metrics; we recall here their definition, along with its properties and
limitations and a new alternative that remedies those shortcomings. [Labelle and Shewchuk] define
the distortion between two points p and q of Ω as follows.
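A plausible form of this definition, assuming Fp and Fq denote the square roots of the metrics Gp and
Gq (as in Table 7.1) and ‖.‖2 the matrix operator norm, is

\[
\psi(p,q) \;=\; \psi(G_p,G_q) \;=\; \max\!\left( \left\lVert F_p F_q^{-1} \right\rVert_2 ,\; \left\lVert F_q F_p^{-1} \right\rVert_2 \right) \;\ge\; 1 ,
\]

so that the distortion equals 1 precisely when the two metrics agree, and grows as the two metrics
measure lengths increasingly differently.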
217 Preparata, F. P., and Shamos, M. Computational geometry: an introduction. Springer Science & Business, 2012.
218 Boissonnat, J.-D., Wormser, C., and Yvinec, M. Locally uniform anisotropic meshing. Proceedings of the
twenty-fourth annual symposium on Computational geometry (2008), ACM, pp. 270–277.
219 Mavriplis, D. J. Adaptive mesh generation for viscous flows using triangulation. Journal of computational
specifications. Part I. Algorithms. Finite Elem. Anal. Des. 25, 1-2 (1997), 61–83.
221 Dobrzynski, C., and Frey, P. Anisotropic Delaunay mesh adaptation for unsteady simulations. Proceedings of
generation. Proceedings of the 19th annual symposium on Computational geometry (New York, NY, USA, 2003),
ACM Press, pp. 191–200.
224 Du, Q., and Wang, D. Anisotropic centroid Voronoi tessellations and their applications. SIAM Journal on
metric attached to the vertex. The use of anisotropic stars, inspired by the works of [Shewchuk]226
and [Schoen]227, is combined with the sliver removal techniques proposed by [Li and Teng]228-229.
These techniques are adapted to the anisotropic setting to construct a star-based refinement
algorithm that cleverly selects refinement points. The algorithm works in any dimension and offers
guarantees on its termination and on the quality (size and shape) of the final simplexes.
7.1.1.5 The Star Set
Many algorithms have been devised to construct Euclidean Delaunay triangulations, but they cannot
be simply extended to arbitrary metric fields. Although the computation of a curved Riemannian
Delaunay triangulation is difficult, it is easy to compute the Delaunay triangulation of a set of points
P with respect to the metric Gp (Delp(P)). Any metric-dependent geometric construction on a set of
points P, like the Voronoi diagram or a simplex circumcircle, can therefore be obtained for the metric
Gp by the following set of operations. First, compute the transformed point set Fp(P). Then, compute the
construction with respect to the Euclidean norm on Fp(P) and transform the result back through F−1p. The triangulation
is thus simply the image through the stretching transformation F−1p of the Euclidean Delaunay
triangulation Del(Fp(P)) where Fp(P) = {Fppi, pi ∈ P}. As explained before, a sphere in metric space
is an ellipsoid in the Euclidean space. Each simplex of a uniformly anisotropic Delaunay triangulation
Delp(P) thus possesses an empty circumscribing ellipsoid, the inverse-transformed Delaunay ball
from metric to Euclidean space (see Figure 7.2).
Figure 7.2 An anisotropic Uniform Delaunay Triangulation (orange) and the Corresponding Stretched
Delaunay Balls and Circumcenters (black circles) - (Courtesy of [Labbe])
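A minimal sketch of this stretch-and-map-back construction (not code from [Labbe]; it assumes NumPy
and SciPy are available and that Gp is symmetric positive definite):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.linalg import sqrtm

def delaunay_in_metric(points, G_p):
    """Delaunay triangulation of `points` with respect to a single (uniform)
    metric G_p, built via the stretching transformation F_p = sqrt(G_p):
    stretch the points, take the Euclidean Delaunay triangulation, and keep its
    connectivity (geometric constructions would be mapped back through inv(F_p))."""
    F_p = np.real(sqrtm(G_p))          # square root of the metric
    stretched = points @ F_p.T         # image F_p(P) of the point set
    return Delaunay(stretched).simplices

# Example: a 2D metric stretching the x direction by a factor of 5
G = np.diag([25.0, 1.0])
pts = np.random.rand(200, 2)
simplices = delaunay_in_metric(pts, G)
```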
The central idea of the framework of locally uniform anisotropic meshes is to approximate at each
vertex p a given arbitrary metric field g by the uniform metric defined by extending Gp over the
domain. Similarly to the way affine functions are locally good approximations of a generic continuous
function, the approximation of an arbitrary metric field by a uniform metric will be accurate as long as
we stay in a small neighborhood. At each vertex, a Delaunay triangulation that conforms to the
226 Shewchuk, R. Star splaying: an algorithm for repairing Delaunay triangulations and convex hulls. Proceedings
of the twenty-first annual symposium on Computational geometry (New York, NY, USA, 2005).
227 Schoen, J. Robust, guaranteed-quality anisotropic mesh generation. M.S. thesis, UC at Berkeley, 2008.
228 Li, X.-Y. Sliver-free Three Dimensional Delaunay Mesh Generation. PhD thesis, University of Illinois at Urbana
uniform metric field of that vertex is constructed. These independent triangulations can under some
density conditions be combined to obtain a final triangulation of the domain.
7.1.1.6 Stars and Inconsistencies
The star of a vertex p in a simplicial complex K, denoted by Sp, is defined as the sub-complex of K
formed by the set of simplexes that are incident to p. The idea of considering independent stars at the
vertices of a point set was first conceived by [Shewchuk] to handle moving vertices in finite element
meshes. This structure was also employed by [Schoen], who introduced anisotropic stars whose
connectivity is obtained by building an isotropic Delaunay mesh of a transformed point set. The star
Sp of p ∈ P is in that case extracted from the complex Delp(P). This construction was described in the
previous section and forms the core of the locally uniform anisotropic mesh framework.
Figure 7.3 Two stars Sp and Sq forming an inconsistent configuration - (Courtesy of [Labbe])
The collection of all the stars is
called the (anisotropic) star set of P and is noted S(P). As the connectivity of each star is set according
to the metric Gp at the center of the star, a given n-simplex has n + 1 different Delaunay balls, one
with respect to the metric of each vertex of the simplex. Consequently, there are in general
inconsistencies among the stars of the sites: a simplex τ appearing in the stars of some of its vertices,
may not appear in the stars of all of them (Figure 7.3). If a simplex is involved in such configuration,
it is said to be inconsistent. Stars containing such simplexes are inconsistent stars and a star set
with at least one inconsistent star is also said to be inconsistent. Conversely, a star whose simplexes
are all consistent is said to be consistent, and a star set is consistent if all of its stars are consistent. The main idea
of the algorithm is to refine the set of sites P while maintaining the set of stars S(P) until each star Sp
in S(P) is composed of simplexes that are well shaped and well sized in the metric Gp, and until there
are no more inconsistencies among the stars. Once a consistent star set is achieved (which is proven
to happen), all the stars can be stitched into a single triangulation: a locally uniform anisotropic mesh
that conforms to the specified metric field and offers guarantees on the quality of the simplexes.
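The inconsistency test itself is simple to express. The following sketch (illustrative data structures
only, with each star stored as a set of simplexes given as frozensets of vertex ids) flags every simplex
that is missing from the star of at least one of its vertices:

```python
def inconsistent_simplices(stars):
    """`stars` maps a vertex id p to the set of simplices forming its star S_p.
    A simplex is inconsistent if it does not appear in the stars of all of its
    vertices."""
    bad = set()
    for p, star in stars.items():
        for simplex in star:
            if any(simplex not in stars[q] for q in simplex):
                bad.add(simplex)
    return bad

# Toy star set: triangle (0, 2, 3) appears in the stars of 0 and 2 but not of 3,
# so the star set is inconsistent.
stars = {
    0: {frozenset({0, 1, 2}), frozenset({0, 2, 3})},
    1: {frozenset({0, 1, 2})},
    2: {frozenset({0, 1, 2}), frozenset({0, 2, 3})},
    3: {frozenset({2, 3, 4})},
    4: {frozenset({2, 3, 4})},
}
print(inconsistent_simplices(stars))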
7.1.2 Refinement Algorithm
The simplest idea to refine a simplex τ in a star Sp is to insert a new site at the center cp(τ) of the
Delaunay Gp-ball of τ. This technique is very common in Delaunay refinement algorithms as the
Delaunay ball of the simplex is by construction not empty after the insertion of its center and thus
the simplex cannot appear in the new Delaunay triangulation. This simple strategy may unfortunately
lead to cascading occurrences of inconsistencies, for the same reason that the refinement of Delaunay
meshes cannot remove slivers. An alternative strategy is devised, inspired by the work of [Li and
Teng] to avoid slivers in isotropic meshes. The adaptation of [Li and Teng’s] techniques to the present
algorithm and the development of the refinement algorithm are described in detail in the following.
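As a minimal sketch of that naive refinement point (not the Pick_valid procedure, which is more
elaborate): the Gp-circumcenter of a simplex is the Euclidean circumcenter of the stretched simplex,
mapped back through the inverse of Fp. Assumes NumPy and a full-dimensional simplex.

```python
import numpy as np

def gp_circumcenter(simplex_pts, F_p):
    """Center of the Delaunay G_p-ball of a d-simplex given as a (d+1, d) array
    of vertex coordinates; F_p is the square root of the metric G_p."""
    q = simplex_pts @ F_p.T                       # stretched vertices
    A = 2.0 * (q[1:] - q[0])                      # from |c - q_i|^2 = |c - q_0|^2
    rhs = np.sum(q[1:] ** 2 - q[0] ** 2, axis=1)
    c = np.linalg.solve(A, rhs)                   # Euclidean circumcenter
    return np.linalg.inv(F_p) @ c                 # map back to the original space
```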
• The success percentage of the Pick_valid procedure increases when φ0 decreases, as the
theory predicts. However, a low value of φ0 will very quickly be responsible for the insertion
of an exceptionally large number of vertices (red curves).
• The final number of points is greater whenever φ0 is used than when it is not (blue curves).
• However small φ0 is, there are still inconsistencies that appear (the teal curves show the
number of vertices inserted to solve inconsistencies), proving that the Pick_valid procedure
is a necessary part of the algorithm. Even more interestingly, they add a relatively constant
number of vertices, indicating that the distortion rules did not help.
Figure 7.4 Influence of the Parameter ψ0 in a 2D (shown on the left) and 3D Domain (shown on the right) - (Courtesy of [Labbe])
For triangles, the quality measure is

\[
Q \;=\; 4\sqrt{3}\,\frac{A}{p\,h}
\qquad \text{Eq. 7.5}
\]

where A is the area of the triangle, p the perimeter and h the longest edge (all computed in the metric).
For cells, we use the quality estimation of [Frey and George]231

\[
Q \;=\; 216\sqrt{3}\,\frac{V^2}{A_\Sigma^3}
\qquad \text{Eq. 7.6}
\]

where V is the volume of the tetrahedron, and A_Σ the sum of the areas of the four facets (all computed
in the metric). Both these quality measures lie between 0 and 1, with 1 signaling the highest quality.
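For concreteness, the two measures can be evaluated as follows (a sketch in the Euclidean metric; in
the algorithm the vertices would first be stretched through Fp so that lengths, areas and volumes are
measured in the metric):

```python
import numpy as np
from itertools import combinations

def tri_quality(a, b, c):
    """Eq. 7.5: Q = 4*sqrt(3)*A / (p*h), with p the perimeter and h the longest edge."""
    e = [np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c)]
    u, v = b - a, c - a
    A = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    return 4.0 * np.sqrt(3.0) * A / (sum(e) * max(e))

def tet_quality(a, b, c, d):
    """Eq. 7.6: Q = 216*sqrt(3)*V**2 / A_sigma**3, with A_sigma the total facet area."""
    V = abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0
    area = lambda p, q, r: 0.5 * np.linalg.norm(np.cross(q - p, r - p))
    A_sigma = sum(area(*f) for f in combinations((a, b, c, d), 3))
    return 216.0 * np.sqrt(3.0) * V ** 2 / A_sigma ** 3

# Returns 1.0 for the equilateral triangle (and tet_quality does for the regular tetrahedron).
print(tri_quality(np.array([0., 0.]), np.array([1., 0.]), np.array([0.5, np.sqrt(3) / 2])))
```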
Results are detailed in Table 7.2. As expected, more solutions are available in the Pick_valid
algorithm when the picking region is enlarged, resulting in smaller meshes. However, the metric is
followed more loosely and the simplexes of the meshes have lower quality. Note that in extreme cases
–δ close to 1 – the final number of vertices starts to increase again, as the refinement points are often
230 Zhong, Z., Guo, X., Wang, W., Lévy, B., Sun, F., Liu, Y., and Mao, W. Particle-based anisotropic surface meshing.
ACM Trans. Graph. 32, 4 (2013).
231 Frey, P., and George, P. L. Mesh generation: Application to finite elements. Hermes Science, 2008.
not creating satisfying elements with respect to other criteria, such as the shape. The parameter β is
assigned the value 2.5 by default, but changing this value (within 1 to 5) has barely any influence on
the outcome.
Table 7.2 Comparison of the number of vertices and quality of the mesh for different values
of δ - (Courtesy of [Labbe])
7.1.3.4 Parameter σ0
The parameter σ0 controls the maximal sliverity of the simplexes. A majority of inconsistencies do not
stem from slivery quasi-cosphericities, and thus this parameter has little influence on the outcome.
By increasing its value, one simply trades the removal of inconsistencies for the removal of slivers,
but the result stays the same. Furthermore, the sliver and inconsistency queues both rely on the
Pick_valid procedure to insert a point and thus there is no difference in the running time of the
algorithm either.
7.1.4.3 Starred
Our first example uses the Starred metric field, detailed in Appendix A of [Labbe]232. The anisotropy
of this metric field varies between 1 and 10. The Starred metric field possesses regions of straight
anisotropy that are long compared to the prescribed size of the elements, with regions where the
metric field is rotating (and the anisotropy ratio is lower) in between, thus providing a good first
example of an arbitrary metric field. The domain is a square of side 10, centered on the origin, endowed
with the Starred metric field (Figure 7.5, with a zoom on one of the rotating regions on the right).
The final mesh is composed of 47,126 vertices and 94,366 triangles.
The algorithm terminates without any issue and the size and shape criteria are solved quickly (4499
vertices are needed). However, the resolution of inconsistencies is difficult and requires around
40000 additional vertices. While regions where the metric field is rotating are clearly suffering from
inconsistencies, regions where the metric field is straight fare only slightly better and also require
many vertex insertions to solve inconsistencies.
Figure 7.5 A square of side 10 and centered on the origin, endowed with the “Starred” metric field
(left) - (Courtesy of [Labbe])
7.1.4.4 Hyperbolic
The hyperbolic shock metric field has already been used several times in previous sections and its
definition is detailed. This metric field is characterized by an anisotropy ratio that varies between 1
and 15 and is interesting as its anisotropy ratio does not vary (too much) along the shock, despite the
shock being shaped like a sinusoidal curve. We shall refer to the regions of the shock where the
eigenvectors change rapidly as the "turns". In these turns, the process of generating a good
anisotropic mesh is difficult as the eigenvectors of the metrics are changing rapidly. As larger meshes
have already been produced in other sections and very dense refinement was observed at the turns,
we here zoom on one of these regions. The result is shown in Figure 7.6. The size and shape
constraints are quickly satisfied and only require around 414 vertices, but the final mesh is composed
of 4621 vertices. Indeed, the resolution of inconsistencies is difficult especially within the shock and
many vertices are required to obtain a consistent mesh despite a relatively low maximum anisotropy
ratio. Consequently, the metric field is honored (and consequently the sizing field is too), but the
simplexes are often much smaller, and the resolution of inconsistencies much longer, than what we
hoped for.
232 Mael Rouxel-Labbe, “Anisotropic mesh generation”, Université Côte d’Azur, 2016.
Figure 7.6 Anisotropic Triangulation of a Rectangle Endowed with the Hyperbolic Shock
Metric Field - (Courtesy of [Labbe])
7.1.4.4.1 Swirl
The Swirl metric field aims to represent a whirling phenomenon. Contrary to the two previous metric
fields, the Swirl metric field has a relatively constant anisotropy ratio and is rotating almost
everywhere. Figure 7.7 shows the mesh obtained by our implementation for a square of side 6
endowed with this metric field. The result exhibits the same issues as in the previous experiments:
the resolution of inconsistencies is difficult and many vertices are required to solve inconsistencies
after all other criteria are satisfied.
Figure 7.7 A square of side 6 and centered on the origin, endowed with the “Swirl” metric field - (Courtesy of Labbé et al.)
7.1.4.4.2 Curvature-Based Metric Fields on Surfaces
We now consider the setting of domains embedded in R3, and the generation of pure surface
meshes. The metric field induced by the curvature of the domain is known to prescribe an anisotropy
that is asymptotically optimal - a mesh whose elements follow this metric field will require the
lowest number of elements (out of all the meshes) to achieve a given approximation of the domain. It
is thus interesting to observe the results produced by our algorithm for this specific metric field.
7.1.4.4.3 Optimization
Optimization is often used to improve the quality of a triangulation. Centroidal Voronoi tessellations
are Voronoi diagrams whose generators are also the centroids (centers of mass) of their respective
cells. The famous Lloyd algorithm233 iteratively moves the seeds to the center of mass of their
respective cell and recomputes the Voronoi diagram of this new seed set. We approximate the
Riemannian Voronoi center of mass of a cell V with the following formula:

\[
c_g \;=\; \frac{\sum_i c_i\,\lvert t_i \rvert_{g_i}}{\sum_i \lvert t_i \rvert_{g_i}}
\qquad \text{Eq. 7.7}
\]

where ci is the centroid of the triangle ti, |ti|gi is the area of ti in the metric gi, and the ti make a partition of V. The canvas
conveniently provides this decomposition of a geodesic Voronoi cell in small triangles, making the
approximation of cg accurate. The formula in Eq. 7.7 does not extend to surfaces as the result of the
weighted sum might not lie on the domain. In that setting, we use a process similar to [Wang et al.]234.
Like its Euclidean counterpart, this algorithm comes with no guarantees (not even for termination) but
works well in practice. In Figure 7.8, the initial SRDT of 4000 seeds has been optimized with 100
iterations. The metric field is well captured with few elements, especially in the rotational region.
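A sketch of how Eq. 7.7 can be evaluated on the canvas (illustrative names; the canvas is assumed
planar, with the metric taken constant over each small canvas triangle):

```python
import numpy as np

def metric_area(p0, p1, p2, G):
    """Area of a canvas triangle in the metric G, computed on the stretched vertices."""
    F = np.linalg.cholesky(G).T          # any square root of G works: F.T @ F = G
    q0, q1, q2 = (F @ p for p in (p0, p1, p2))
    u, v = q1 - q0, q2 - q0
    return 0.5 * abs(u[0] * v[1] - u[1] * v[0])

def riemannian_centroid(tri_vertices, metrics):
    """Approximate center of mass (Eq. 7.7) of a Voronoi cell partitioned into the
    canvas triangles `tri_vertices` (list of 3x2 arrays), with `metrics` holding the
    metric g_i sampled on each triangle: weighted average of the triangle centroids."""
    weights = np.array([metric_area(*t, G) for t, G in zip(tri_vertices, metrics)])
    centroids = np.array([np.mean(t, axis=0) for t in tri_vertices])
    return (weights[:, None] * centroids).sum(axis=0) / weights.sum()
```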
7.1.4.5 Discrete Riemannian Voronoi Diagrams
Several authors have considered Voronoi diagrams based on anisotropic distances to obtain
triangulations adapted to an anisotropic metric field. These authors hoped to build upon the well-
established concepts of the Euclidean Voronoi diagram and its dual structure, the Delaunay
triangulation, for which many theoretical and practical results are known. The computation of
geodesic path lengths in any domain is a difficult task as there is generally no closed form available.
The wide range of possible domains can be narrowed, through mesh generation, to piecewise-linear
domains. Even in this simpler setting, the computation of geodesic distances and paths is still a
complex problem to which much work has been dedicated in the last decades. Despite many studies,
geodesic distances still cannot be obtained exactly for most domains endowed with an arbitrary
metric field. Nevertheless, we introduce a discrete structure that is, under some conditions,
combinatorially equivalent to the Riemannian Voronoi diagram and whose duals are triangulations.
7.1.4.5.1 Advantages Over Isotropic Canvasses
To ensure that the nerve of the Riemannian Voronoi diagram is captured in the case of an isotropic
triangulation used as canvas, the (uniform) sizing field of the canvas must be small enough such that
Voronoi bisectors are clearly distinct. As the anisotropy ratio increases, Voronoi cells become thinner
and the number of canvas vertices required to capture the nerve rapidly grows (Figure 7.9, left and
center235). On the other hand, the placement of vertices in an anisotropic canvas is by construction
not uniform and does not suffer from the same issue: as the anisotropy of the metric field grows,
Voronoi cells and canvas simplices become thinner in tandem. The number of canvas vertices in a
233 S. Lloyd, Least squares quantization in PCM, IEEE Trans. Inf. Theo. 28 (1982) 129–137.
234 X. Wang, X. Ying, Y.-J. Liu, S.-Q. Xin, W. Wang, X. Gu, W. Mueller-Wittig, Y. He, Intrinsic computation of
centroidal Voronoi tessellation (CVT) on meshes, Computer-Aided Design 58 (2015) 51–61.
235 In the case of an isotropic canvas, increasing the anisotropy of a cell increases the number of vertices
required to properly capture it (left and middle). This is not the case if the canvas can be anisotropic (left and
right).
Voronoi cell is thus relatively constant regardless of the anisotropy. As the star set satisfies a sizing
field of 0.1r0, the canvas edges are roughly 10 times smaller than the distance between seeds.
Consequently, the canvas is composed of approximately 10^n times more vertices than seeds, with n the
intrinsic dimension of the domain. The use of an anisotropic canvas greatly decreases the
computational time as the number of vertices in the canvas is drastically reduced, without any change
in the extracted nerve.
7.1.4.5.2 Straight Riemannian Delaunay Triangulation
By definition, the Riemannian Voronoi diagram captures the metric field more accurately than other
methods that typically only consider the metric at the vertices. This additional input of information
allows us to construct curved Riemannian Delaunay triangulations, but also has a positive influence
on the straight realization of the diagram. Figure 7.10 shows the different structures involved in our
algorithm during the generation of an anisotropic mesh for the sphere endowed with a hyperbolic
shock metric field236. On the left is the canvas (an isotropic triangulation here); the middle sphere
shows the discrete Riemannian Voronoi diagram computed upon this canvas; finally, the right picture
shows the dual of the discrete diagram, an anisotropic triangulation. Contrary to the previous
approaches introduced and investigated, no over-refinement is observed, including in the regions
where the eigenvectors of the metric field are rotating (where the shock turns). The final mesh has
slightly fewer than 4000 vertices, which is the number of vertices that was required by our locally
uniform anisotropic meshes to generate a mesh of only a small region of that domain.
Figure 7.8 The Optimized SRDT of 4000 Seeds in a Planar Domain Endowed with a Hyperbolic Shock
Induced Metric Field (left). On the right, a zoom on a rotational region of the metric field shows the
difference between pre- (above) and post- (bottom) Optimization – (Courtesy of Labbé et al.)
236 The unit sphere endowed with the hyperbolic metric field (approximately 4000 vertices). Isotropic canvas
(left), discrete Riemannian Voronoi diagram (center), and straight Riemannian Delaunay triangulation (right).
Figure 7.9 Unit Sphere Endowed with the Hyperbolic Metric field - (Courtesy of [Labbe])
237 Mael Rouxel-Labbe, “Anisotropic mesh generation”, Université Côte d’Azur, 2016.
238 Boissonnat, J.-D., Wormser, C., and Yvinec, M. Locally uniform anisotropic meshing. Proceedings of the
twenty-fourth annual symposium on Computational geometry (2008), ACM, pp. 270–277.
239 M. Rouxel-Labbé, M. Wintraeckenb, J.D. Boissonnat, “Discretized Riemannian Delaunay triangulations”, 25th
Figure 7.12 Discrete Riemannian Voronoi Diagram (top) and Curved Riemannian Delaunay
Triangulation (bottom) Endowed with the Hyperbolic Shock Metric Field - (Courtesy of
[Labbe])
8 Hybrid Meshes I
A hybrid grid contains a mixture of structured portions and unstructured portions. It integrates the
structured meshes and the unstructured meshes in an efficient manner. Those parts of the geometry
that are regular can have structured grids and those that are complex can have unstructured grids.
These grids can be non-conformal, which means that grid lines don’t need to match at block
boundaries240. In recent years, hybrid grids have received a lot of attention due to accuracy
considerations when capturing the physics (sub-layers near boundaries) and, at the same time, the
added flexibility in domain discretization (automated meshing using tetrahedra, polyhedra, etc.).
Figure 8.1 shows a hybrid mesh obtained from an STL surface using an OpenFOAM© meshing module,
and the solution for a steady-state incompressible turbulent flow.
Figure 8.1 Hybrid Grid and Steady State Solution
Unstructured Mesh Generation. International Journal for Numerical Methods in Engineering, 2003.
244 Weatherill, N. P., Hassan, O., Morgan, K., Jones, J. & Larwood, B., “Towards Fully Parallel Aerospace Simulations
prismatic layers must be kept. Figure 8.2 (a) shows the cavity in a grid where the grid around a
rotated airfoil is inserted into a grid around another airfoil. The cavity is formed by a set of inner and
outer boxes. Inside the cavity a grid is generated by the advancing front algorithm. The merged grid
is shown in Figure 8.2 (b). An application of this technique together with the flow solver Edge, see
[Eliasson]246, for store separation computations is reported in [Berglind]247 and [Berglind et al.]248.
246 Eliasson, P., Nordström, J., Peng, S-H. & Tysell, L., ”Effect of Edge-based Discretization Schemes in
Computations of the DLR F6 Wing-Body Configuration”. Paper AIAA-2008-4153.
247 Berglind, T., Numerical Simulation of Store Separation for Quasi-Steady Flow. FOI-R-2761-SE, FOI, Swedish
Grid Generation, International Society of Grid Generation (ISGG), Montreal, Canada, 2009.
250 Ghidoni, A., Pelizzari, E., Rebay, S. & Selmin, V., 3D Anisotropic Unstructured Grid Generation. International
Numerical Grid Generation, International Society of Grid Generation (ISGG), Montreal, Canada.
253 Pirzadeh, S. 2008, Advanced Unstructured Grid Generation for Complex Aerodynamic Applications. Paper
AIAA-2008-7178.
even better background grid by first generating a coarse volume grid and then interpolating the
surface cell size specification into this grid and finally using this grid as a new background grid is
given in [Tysell ]254. This is an alternative to the octree background grid generation method given in
[McMorris & Kallinderis]255. For further details, please consult the work by [Tysell]256.
8.1.4 Boundary Viscous Meshes & Sharp Corners
In [Aubry & L¨ohner]257 the same prismatic grid generation algorithm as was given in [Tysell] to
compute the most visible normal vector to a surface is described. A problem in prismatic grid
generation is the quality of the grid in corners of the surface. [Sharow et al.]258 suggest this can
be solved by generating extremely fine surface grids at corners, having a cell size of about the height
of the first cells in the prismatic layer. A drawback of this approach is of course that the number of
cells will increase considerably. [Soni et al.]259 use the concept of a semi-structured topology in the
near surface regions, generated by a parabolic grid generation algorithm. In this concept corners can
be handled by excluding or introducing nodes from one layer to the next layer. Khawaja et al. (1999)
introduce the concept of varying number of prismatic layers, where different surface nodes can have
different numbers of layers. This concept has also been used in [Tysell]260. For some surface nodes
it may not be possible to define a visible normal vector. Thus a prismatic layer cannot be
generated at these nodes. A remedy is to use multiple normal vectors at these nodes. The use of
multiple normals also improves the grid quality at sharp convex corners like wing trailing edges. In
[Steinbrenner & Abelanet]261 difficult regions, especially concave regions, are handled by collapsing
cells. Thus, the number of nodes in one layer may be less than in the previous layer. This kind of
technique can only be used for layers consisting of stretched tetrahedra instead of prisms. Further
details can be obtained at [Tysell]262.
8.1.5 Procedures for Mesh Generation
In all the methods above the stretched cells close to the boundary have been generated first and the
isotropic tetrahedra have been generated in a second step. The methodology is depicted in
Figure 8.3 using a CENTAUR™ generated hybrid grid263. [Ito & Nakahashi]264 and [Karman]265 use a
254 Tysell, L. 2009, CAD Geometry Import for Grid Generation. Proceedings of the 11th ISGG Conference on
Numerical Grid Generation, International Society of Grid Generation (ISGG), Montreal, Canada.
255 McMorris, H. & Kallinderis, Y. 1997, Octree-Advancing Front Method for Generation of Unstructured Surface
Technical Reports from Royal Institute of Technology, Department of Mechanics, Stockholm, Sweden, 2010.
257 Aubry, R. & L¨ohner, R., Geration of Viscous Grids with Ridges and Corners. Paper AIAA-2007-3832.
258 Sharow, D., Lou, H. & Baum, J. 2001, Unstructured Navier Stokes Grid Generation at Corners and Ridges. Paper
AIAA-2001-2600.
259 Soni, B., Thompson, D., Koomullil, R. & Thornburg, H. 2001, GGTK: A Tool Kit for Static and Dynamic Geometry-
Grid Generation, International Society of Grid Generation (ISGG), Heraklion, Crete, Greece.
261 Steinbrenner, J. & Abelanet, J. 2007, Anisotropic Tetrahedral Meshing Based on Surface Deformation
Technical Reports from Royal Institute of Technology, Department of Mechanics, Stockholm, Sweden, 2010.
263 This geometry represents airflow over a transonic transport aircraft, complete with a pylon and flow
through nacelle attached to the wing. A hybrid mesh was generated using CENTAUR containing both prisms in
the boundary layer and tetrahedra in the interior. A symmetry plane was used so that only half the geometry
needed to be modeled.
264 Ito, Y. & Nakahashi, K., Improvements in the Reliability and Quality Un-structured Hybrid Mesh Generation.
different strategy where a grid consisting of isotropic tetrahedra is generated first. In a second
step this grid is pushed away from the boundary and the gap is filled with prismatic cells. A drawback
with this method is that there will likely be a jump in cell size at the interface between the prismatic layer
and the isotropic tetrahedra, since it is difficult to push the initial tetrahedra far enough without
inversion to get a sufficient height of the prismatic layer. [Löhner & Cebral]266 present a method
where an isotropic tetrahedral grid is generated first and then refined with stretched tetrahedra
close to the boundary. The use of prismatic layers along imaginary surfaces in the flow field in order
to catch shocks is presented in [Shih et al.]267. Pointwise™ advises the following steps for their mesh
generation, specifically for airplane geometry268. The procedure can be summarized as:
➢ Advancing Front Orthogonal surface mesh generated automatically on watertight solid model
➢ Generates boundary aligned isotropic triangles
➢ Anisotropic triangles grown off slat, wing, flap LE using T-Rex
➢ Structured diagonalized meshes created on slat, wing, flap TE surfaces to comply with TE
point req.
➢ Spacings applied to key locations in surface mesh.
➢ Chordwise/spanwise number of grid points increased to reduce area ratio/aspect ratio to
reasonable levels.
➢ Manually correct problem areas that were geometry limited.
➢ Grow anisotropic tetrahedra off surface mesh based on refinement level, growth rate and
wall spacing.
➢ Insert equilateral tetrahedra into remainder of volume using modified Delaunay method.
266 Löhner, R. & Cebral, J. 2000, Generation of Non-Isotropic Unstructured Grids via Directional Enrichment.
International Journal for Numerical Methods in Engineering, 49 (1-2), pp. 219-232.
267 Shih, A., Ito, Y., Koomullil, R., Kasmai, T., Jankun-Kelly, M., Thompson, D. & Brewer, W. 2007, Solution
Adaptive Mesh Generation using Feature Aligned Embedded Surface Meshes. Paper AIAA-2007-0558.
268 Carolyn D. Woeber , “Pointwise Unstructured and Hybrid Mesh Contributions to GMGW-1”, 1st Geometry and
269 Batina, J., Unsteady Euler Algorithm with Unstructured Dynamic Mesh for Complex-Aircraft Aerodynamic
Analysis. AIAA Journal, 29 (3), pp. 327-333, 1991.
270 Farhat, C., Degand, C., Koobus, B. & Lesoinne, M. 1998, An Improved Method of Spring Analogy for Dynamic
Paper AIAA-2006-0885.
272 Martineau, D. & Georgala, J., A Mesh Movement Algorithm for High Quality Generalized Meshes. AIAA-2004.
273 Samareh, J. 2002, Application of Quaternions for Mesh Deformation. Proceedings of the 8th International
Conference on Numerical Grid Generation in Computational Field Simulations, pp. 47-57, International Society
of Grid Generation (ISGG), Honolulu, Hawaii, USA.
274 Tysell, L. 2002, Grid Deformation of 3D Hybrid Grids. Proceedings of the 8th International Conference on
Numerical Grid Generation in Computational Field Simulations, pp. 265-274, International Society of Grid
Generation (ISGG), Honolulu, Hawaii, USA.
275 Nielsen, E. & Anderson, K. 2001, Recent Improvements in Aerodynamic Design Optimization on Unstructured
method is the use of radial basis functions, see [Jakobsson & Amignon]278 and [Allen & Rendall]279.
8.1.7 Adaptation
In [Peraire et al.]280 the use of the Hessian281, which measures the second derivatives of the flow
quantities, is introduced in order to generate directionally adapted grids. One drawback with the use
of the Hessian is that the grid can only be adapted to one selected flow quantity. It is in most cases
not possible to find one single flow quantity that works for all cases, or even for all regions of one
case. One remedy to this has been presented in [Castro-Diaz et al.]282 by computing the Hessian for
several flow quantities and then compute the combined Hessian by so called metric intersection. This
way to compute the metric intersection is not optimal, since only the combined eigenvalues are
computed, while the eigenvectors are arbitrarily chosen from one of the flow quantities. Thus, the
way to compute the combined metric from several flow quantities presented in [Tysell et al. (1998)]
is done in a more rigorous way, but the drawback may be that only the first derivatives of the flow
quantities are used instead of the second derivatives. Another way to compute the metric intersection
has been presented in [Frey & Alauzet]283. The Hessians can be represented by a set of ellipsoids and
in that paper the intersection is computed by computing the largest ellipsoid inscribed in all
intersected ellipsoids.
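One common recipe for turning a Hessian into an adaptation metric is sketched below (illustrative
only, not necessarily the exact formulation of the cited papers): take the eigendecomposition, make
the eigenvalues positive, scale them by an error tolerance, and clip them so that the prescribed sizes
stay between user-supplied minimum and maximum cell sizes.

```python
import numpy as np

def hessian_metric(H, eps, h_min, h_max):
    """Anisotropic metric tensor built from the Hessian H of one flow quantity.
    The eigenvectors give the stretching directions; eigenvalues are scaled by
    the tolerance eps and clipped to the range [1/h_max**2, 1/h_min**2]."""
    lam, R = np.linalg.eigh(H)                     # H is symmetric
    lam = np.clip(np.abs(lam) / eps, 1.0 / h_max**2, 1.0 / h_min**2)
    return R @ np.diag(lam) @ R.T
```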
A rigorous way to compute this metric intersection has recently been presented in [McKenzie et al.
]284, where each Hessian is introduced successively by transforming the Hessian to a space where
the current transformation is represented by a sphere. In this space the intersection is easy to
compute. For both first- and second-derivative adaptive sensors, limits on the cell sizes must be set in
regions of flow discontinuities, where the cell size otherwise would become arbitrarily small, and in
regions where the flow is varying very slowly, where the cell sizes would become too large. One
drawback of using the Hessian is also that the cell sizes tend to become arbitrarily large where the flow
is varying linearly. It has been shown in [Venditti & Darmofal]285, where they compare results for flow
around a multiple airfoil configuration, that a Hessian-based method gives too large cell sizes in
regions of linearly varying flow compared to an adjoint-based adaptation method. In the paper they
propose a combination of the two methods, where the cell sizes are taken from the adjoint
computation, whereas the directional stretching is taken from the Hessian. In [Remaki et al.]286
the metric tensor is computed by taking a weighted sum of the Hessian and the gradient tensor, in
278 Jakobsson, S. & Amignon, O. 2005, Mesh Deformation using Radial Basis Functions for Gradient Based
Aerodynamic Shape Optimization. Technical Report FOI-R-1784-SE, FOI, Swedish Defense Research Agency.
279 Allen, C. & Rendall, T. 2007, Unified Approach to CFD-CSD Interpolation and Mesh Motion using Radial Basis
scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The
Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later
named after him. Hesse originally used the term "functional determinants".
282 Castro-Diaz, M., Hecht, F., Mohammadi, B. & Pironneau, O., Anisotropic Unstructured Mesh Adaption for Flow
Simulations. International Journal for Numerical Methods in Fluids, 25 (4), pp. 475-491, 1997.
283 Frey, P. & Alauzet, F. 2005, Anisotropic Mesh Adaptation for CFD Computations. Computer Methods in Applied
Union. Proceedings of the 11th ISGG Conference on Numerical Grid Generation, International Society of Grid
Generation (ISGG), Montreal, Canada.
285 Venditti, D. & Darmofal, D. 2003, Anisotropic Grid Adaptation for Functional Outputs: Application to Two-
Dimensional Viscous Flows. Journal of Computational Physics, 187 (1), pp. 22-46.
286 Remaki, L., Nadarajah, S. & Habashi, W. 2006, On the a Posteriori Error Estimation in Mesh Adaptation to
order to get better grid resolution also in areas of linearly varying flow quantities. The use of the
Hessian and combinations of grid cell split, merge, swapping and node movement is used in [Xia et
al.]287 and [Dompierre et al.]288 for directional h-adaptation. In these papers the method has been
applied in two dimensions. The same method has been used to present three-dimensional results in
[Park & Darmofal]289. This method is faster than the total re-meshing used in [Peraire et al.]290 and
[Tysell et al.]291 but appears to give grids of lower quality. In [Pirzadeh]292 adaptation by re-meshing
is done by re-meshing only locally, where the grid needs to be adapted. In this way the time for
re-meshing is reduced. Further details can be obtained in [Tysell]293.
287 Xia, G., Li, D. & Merkle, C., Anisotropic Grid Adaptation on Unstructured Meshes. Paper AIAA-2001.
288 Dompierre, J., Vallet, M., Bourgault, Y., Fortin, M. & Habashi, W. 2002, Anisotropic Mesh Adaptation: Towards
User-independent Mesh-independent and Solver-independent CFD. Part III. Unstructured Meshes. International
Journal for Numerical Methods in Fluids, 39 (8), pp. 675-702.
289 Park, M. & Darmofal, D. 2008, Parallel Anisotropic Tetrahedral Adaptation. Paper AIAA-2008-0917.
290 Peraire, J., Peiro, J. & Morgan, K. 1992 Adaptive Remeshing for Three-Dimensional Compressible Flow
6th International Conference on Numerical Grid Generation in Computational Field Simulations, pp. 391-400,
International Society of Grid Generation (ISGG), Greenwich, UK, 1998.
292 Pirzadeh, S. 2000, A solution-Adaptive Unstructured Grid Method by Grid Subdivision and Local Remeshing,
Technical Reports from Royal Institute of Technology, Department of Mechanics, Stockholm, Sweden, 2010.
294 Roe PL. Error estimates for cell-vertex solutions of the compressible Euler equations. ICASE Report No, 1987.
295 Giles MB. Accuracy of node-based solutions on irregular meshes. Eleventh international conference on
296 Satish Chalasani and David Thompson, “Quality improvements in extruded meshes using topologically
adaptive generalized elements”, Int. J. Num. Meth. Eng 2004.
297 Abaqus/CAE
regions with a constant cross-section and a linear sweep path. There are three required parameters
for a bottom-up extruded mesh. As with the sweep method, you choose the Source side that defines
the area on which Abaqus/CAE will create a two-dimensional mesh. You then select the starting and
ending point of a Vector that defines the extrusion direction and can also be used to define the
extrusion distance. Finally, you indicate the Number of layers to define the number of elements
between the source side and the end of the extruded mesh. If you use the vector to define the
extrusion distance, the definition is complete. However, you can Specify a distance or use Project to
target and select a target side to define the extrusion distance. The target side can be selected from
any geometry, mesh, or datum plane in the viewport; it need not be part of the same part instance as
the source. Abaqus/CAE extrudes the two-dimensional mesh from the source side in the direction of
the extrusion vector. If you select a target side to define the extrusion distance, Abaqus/CAE ends the
extruded mesh at the target side. Figure 8.5 shows the source side and target side on the left; the
extrusion vector (not shown) extends from the center of the rectangular source side to the center of
the cylinder. The resulting extruded mesh is an extension of the source side mesh. It closely matches
the target side shape, but no attempt is made to match the node positions of the mesh on the target side.
Figure 8.5 The Optional Target Side (colored white) is used to Define the Extrusion Distance
The Bias ratio parameter (i.e., fill ratio) defines a change in the element thickness between the source
side and the end of an extruded bottom-up mesh in which more than one layer is created. The bias
ratio is the ratio of the thickness of the first layer of elements in the extruded mesh to the thickness
of the last layer of elements. The default bias ratio of 1.0 gives equal thickness elements throughout
the extrusion distance, but other values can be used to concentrate the layers toward the target or
the source side.
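A sketch of the node placement for such a bottom-up extrusion (illustrative only, not the Abaqus/CAE
implementation), with the bias ratio interpreted as the first-to-last layer thickness ratio described
above:

```python
import numpy as np

def extrude_nodes(source_nodes, direction, n_layers, bias_ratio=1.0):
    """Replicate the 2D source-side nodes in `n_layers` element layers along
    `direction` (whose length is the extrusion distance). Layer thicknesses
    form a geometric progression whose first/last ratio equals `bias_ratio`."""
    r = bias_ratio ** (-1.0 / (n_layers - 1)) if n_layers > 1 else 1.0
    t = np.array([r ** i for i in range(n_layers)])
    offsets = np.concatenate(([0.0], np.cumsum(t / t.sum())))   # 0 .. 1
    d = np.asarray(direction, dtype=float)
    return np.vstack([source_nodes + s * d for s in offsets])

# Example: extrude a unit square's corner nodes over 5 layers, biased 2:1
quad = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
nodes = extrude_nodes(quad, direction=[0., 0., 2.], n_layers=5, bias_ratio=2.0)
```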
8.3.2 Sweep
The sweep method creates a three-dimensional mesh by moving a two-dimensional mesh along a
sweep path. The sweep method is illustrated in Figure 8.6. You should use the bottom-up sweep
meshing method when the region cross-section changes between the starting and ending sides. To
use the sweep method, you must first choose a Source side that defines the face or faces on
which Abaqus/CAE will create a two-dimensional mesh. The source side can be any combination of
geometric faces, element faces, and two-dimensional elements. You can define the sweep path by
selecting Connecting sides that define the sides of the desired sweep region. If you define connecting
sides, the mesh conforms closely to the geometry or mesh along the selected sides. Alternatively, for
geometry you can select a Target side and specify a Number of layers and allow Abaqus/CAE to
create the sweep path by interpolating between the source and target sides. The Target side is a
single face that Abaqus/CAE uses to end the mesh. The number of layers refers to the number of
element layers that will be placed between the source and the target sides—if you use connecting
sides, the two-dimensional meshes of the connecting sides define the number of element
[Moxey, et al.]299 proposed an isoperimetric approach, whereby a mesh containing a valid coarse
discretization comprising high-order triangular prisms near walls is refined to obtain a finer
prismatic or tetrahedral boundary-layer mesh.
299 D. Moxey∗, M.D. Green, S.J. Sherwin, J. Peir´o, “An isoperimetric approach to high-order curvilinear boundary-
layer meshing”, Computer Methods Applied Mechanics Engineering, 283 (2015) 636–650.
300 Yao Zheng, Zhoufang Xiao, Jianjun Chen, and Jifa Zhang, “Novel Methodology for Viscous-Layer Meshing by
the Boundary Element Method”, AIAA Journal Vol. 56, No. 1, January 2018.
can be observed in this region. To evaluate the quality of the generated prismatic elements, the
scaled-aspect-ratio quality measure proposed in the literature was adopted in this study. It has been
reported that this quality measure in effect combines the measures of triangle shapes and edge
orthogonality301.
Figure 8.9 Meshes Generated by a) Proposed Algorithm and b) Leading Commercial Vendor
grids”, Proc. AIAA CFD Conf., 12th, San Diego. AIAA Pap. 95-1705-CP
304 Baum JD, Luo H, L¨ohner R., ”A new ALE adaptive unstructured methodology for the simulation of moving
flow variables. Conservative criteria (i.e. over-refining) are often employed to compensate for the
inability to accurately characterize the true solution error. Because adaptive meshing results in
different mesh topologies for each simulation, even when the geometry of the problem is unchanged,
parametric studies (typically used in design processes) are complicated by the requirement to
distinguish between grid-induced and physical solution variations. Nevertheless, for problems with
disparate length scales, adaptive meshing is often indispensable for resolving small flow features,
and its full potential awaits the development of more well-founded adaptive criteria.
Mesh adaptation, often referred to as Adaptive Mesh Refinement (AMR), refers to the modification
of an existing mesh so as to accurately capture flow features. Generally, the goal of these
modifications is to improve resolution of flow features without excessive increase in computational
effort. We shall discuss some of the concepts important in mesh adaptation in the next chapter. Mesh
adaptation strategies can usually be classified as one of three general types: R-Refinement, H-
Refinement, or P-Refinement as depicted in Figure 8.10.
8.4.1 R-Refinement
This is the modification of mesh resolution without changing the number of nodes or cells
present in a mesh or the connectivity of a mesh. The increase in resolution is made by moving the
grid points into regions of activity, which results in a greater clustering of points in those regions.
The movement of the nodes can be controlled in various ways. One common technique is to treat the
mesh as if it were an elastic solid and solve a system of equations (subject to some forcing) that deforms
the original mesh. Care must be taken, however, that no problems due to excessive grid skewness
arise. (See Figure 8.10 (a)).
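A toy sketch of this idea (not a production algorithm): edges are treated as springs whose stiffness is
taken from an error indicator, and interior nodes are relaxed toward the weighted average of their
neighbours so that points cluster where the indicator is large. Boundary nodes are held fixed, and in
practice the result must still be checked for excessive skewness.

```python
import numpy as np

def r_refine(nodes, edges, weights, fixed, n_iter=50, relax=0.5):
    """Weighted Laplacian relaxation: `edges` is a list of (i, j) node pairs,
    `weights` the spring stiffness (error indicator) per edge, `fixed` a boolean
    mask marking boundary nodes that must not move."""
    nodes = nodes.astype(float).copy()
    for _ in range(n_iter):
        num = np.zeros_like(nodes)
        den = np.zeros(len(nodes))
        for (i, j), w in zip(edges, weights):
            num[i] += w * nodes[j]; den[i] += w
            num[j] += w * nodes[i]; den[j] += w
        movable = (~fixed) & (den > 0)
        target = num[movable] / den[movable][:, None]
        nodes[movable] = (1 - relax) * nodes[movable] + relax * target
    return nodes
```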
8.4.2 H-Refinement
The modification of mesh resolution by changing the mesh connectivity. Depending upon the
technique used, this may not result in a change in the overall number of grid cells or grid points. The
simplest strategy for this type of refinement subdivides cells, while more complex procedures may
insert or remove nodes (or cells) to change the overall mesh topology. In the subdivision case, every
"parent cell" is divided into "child cells". The choice of which cells are to be divided is addressed in
Figure 8.10 (b). For every parent cell, a new point is added on each face. For 2D quadrilaterals, a
new point is added at the cell centroid also. On joining these points, we get 4 new "child cells”, (3 in
tetrahedral). Thus, every quad parent gives rise to four new offspring. The advantage of such a
procedure is that the overall mesh topology remains the same (with the child cells taking the place
of the parent cell in the connectivity arrangement). It is easy to see that the subdivision process
increases both the number of points and the number of cells. An additional point to be noted is that
this type of mesh adaptation can lead to what are called "hanging nodes." In 2D, this happens when
one of the cells sharing a face is divided and the other is not, as shown in Figure 8.10 (c). For two
quad cells, one cell is divided into four quads and the other remains as it is. The highlighted node is the
hanging node. This leads to a node on the face between the two cells which does not belong (properly)
to both of the cells. The node "hangs" on the face, and one of the cells becomes an arbitrary
polyhedron. In the above case, the topology seemingly remains the same, but the right (un-divided) cell
actually has five faces. Figure 8.11 shows a mesh modeling supersonic flow around a space shuttle
in which h-method adaptivity has been employed to optimize the mesh structure to produce accurate
simulation of flow features important in assessing the performance of the design such as the profiles
of pressure distribution shown306.
Figure 8.11 An H-refinement mesh about a Shuttle-like body (left) and Computed CP (right)
306 The National Academies Press, “Research Directions In Computational Mechanics”, 1991.
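A sketch of the quad subdivision step just described (illustrative data structures; a hanging node
arises automatically when the neighbour sharing an edge is not subdivided, because the edge
mid-point is created only on the refined side):

```python
import numpy as np

def subdivide_quad(cell, nodes, node_ids):
    """h-refinement of one quadrilateral 'parent cell' (counter-clockwise corner
    ids): add a mid-point on each edge and one at the centroid, and return the
    four 'child cells'. `nodes` is the coordinate list, `node_ids` a dict reusing
    points that already exist so shared edge points are not duplicated."""
    def point_id(p):
        key = tuple(np.round(p, 12))
        if key not in node_ids:
            node_ids[key] = len(nodes)
            nodes.append(np.asarray(p, dtype=float))
        return node_ids[key]

    a, b, c, d = cell
    ab, bc, cd, da = (point_id((nodes[i] + nodes[j]) / 2.0)
                      for i, j in ((a, b), (b, c), (c, d), (d, a)))
    ctr = point_id((nodes[a] + nodes[b] + nodes[c] + nodes[d]) / 4.0)
    return [(a, ab, ctr, da), (ab, b, bc, ctr), (ctr, bc, c, cd), (da, ctr, cd, d)]
```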
8.4.3 P-Refinement
A very popular tool in Finite Element Modelling (FEM) rather than in Finite Volume Modelling
(FVM), it achieves resolution by increasing the order of accuracy of the polynomial in each element
(or cell). In AMR, the selection of "parent cells" to be divided is made on the basis of regions where
there is appreciable flow activity. It is well known that in compressible flows, the major features
would include Shocks, Boundary Layers and Shear Layers, Vortex flows, Mach Stem, Expansion
fans and the like. It can also be seen that each feature has some "physical signature" that can be
numerically exploited. For example, shocks always involve a density/pressure jump and can be detected
by their gradients, whereas boundary layers are always associated with rotationality and hence can
be detected using the curl of velocity. In compressible flows, the velocity divergence, which is a measure
of compressibility, is also a good choice for shocks and expansions. These sensing parameters which
can indicate regions of flow where there is activity are referred to as Error Indicators and are very
popular in AMR for CFD. The spectral order p of the approximation is raised or lowered to control
error. In finite element methods or boundary element methods, the order p corresponds to the
degree of the polynomial shape function used over an element. Just as refinement is possible by Error
Indicators as mentioned above, certain other issues also assume relevance. Error Indicators do detect
regions for refinement, they do not actually tell if the resolution is good enough at any given time. In
fact the issue is very severe for shocks: the smaller the cell, the higher the gradient, and the indicator
would keep on picking the region unless a threshold value is provided. Further, many users make
use of conservative values while refining a domain and generally end up refining more than the
essential portion of the grid, though not the complete domain. These refined regions are unnecessary
and, in the strictest sense, contribute to unnecessary computational effort. It is at this juncture that
reliable and reasonable measures of cell error become necessary for the process of "coarsening",
which would reduce the above-said unnecessary refinement, with a view towards generating an
"optimal mesh". The measures are given by sensors referred to as Error Estimators, literature on
which is in abundance in FEM, though these are very rare in FVM. Control of the refinement and/or
coarsening via the error indicators is often undertaken by using either the 'solution gradient' or
'solution curvature'. Hence the refinement variable coupled with the refinement method and its
limits all need to be considered when applying mesh adaptation.
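A toy sketch of such error indicators (illustrative thresholds and field names; in practice the sensors
are evaluated from the discrete solution and normalized before comparison):

```python
import numpy as np

def flag_cells_for_refinement(grad_rho, vorticity, divergence, thresholds):
    """Return the indices of cells whose sensing parameters exceed the user
    thresholds: density gradient (shocks), vorticity magnitude (boundary and
    shear layers), velocity divergence (shocks and expansions)."""
    flags = (np.abs(grad_rho)   > thresholds["grad_rho"]) | \
            (np.abs(vorticity)  > thresholds["vorticity"]) | \
            (np.abs(divergence) > thresholds["divergence"])
    return np.flatnonzero(flags)
```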
307 Cavallo, P.A., Sinha, N., and Feldman, G.M., “Parallel Unstructured Mesh Adaptation For Transient Moving Body
And Aeropropulsive Applications”, Combustion Research and Flow Technology, Inc. (CRAFT Tech), Pipersville, PA.
308 M. Turner, D. Moxeya, J. Peir´, “Automatic mesh sizing specification of complex three dimensional domains
be placed on the longest edge to collapse, in order to avoid the creation of excessively large elements.
In the case of collapsing to M01, the algorithm proceeds by deleting all the mesh regions connected to
M02, creating a polyhedral cavity within the mesh. The edge collapsing is then completed by connecting
all the faces of the cavity to M01 in order to form the new mesh regions. The procedure is illustrated
in Figure 8.13.
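A compact sketch of the collapse operation on a tetrahedral region list (logically equivalent to the
cavity re-connection described above; M01 and M02 play the roles of v_keep and v_remove):

```python
def collapse_edge(regions, v_keep, v_remove):
    """Collapse the edge (v_keep, v_remove): regions containing both vertices
    disappear (they would have zero volume), and in the remaining regions that
    contained v_remove the vertex is replaced by v_keep, which re-connects the
    cavity faces to v_keep."""
    new_regions = []
    for tet in regions:
        if v_remove not in tet:
            new_regions.append(tet)
        elif v_keep in tet:
            continue
        else:
            new_regions.append(tuple(v_keep if v == v_remove else v for v in tet))
    return new_regions
```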
8.5.1.1 Case Study - Numerical Testing for Engine Nacelle
In this example we consider the model of an engine inlet with a center body. The initial CFD mesh of
119,861 tetrahedra was generated with the Finite Octree mesh generator309. A 4-level multigrid was
then generated by means of uniform coarsening, marking all the edges for de-refinement, obtaining
coarser levels of 24,619, 6,477 and 1,819 tetrahedra, respectively. We set a limit of 160° on the
largest dihedral angle generated during coarsening, while we targeted for optimization all the
regions with at least one angle above 145°. The coarser meshes are presented in Figure 8.14 (a),
(b), (c) and (d). As opposed to the refinement procedure, an improvement of the mesh quality
cannot be expected since coarsening introduces constraints by deleting degrees of freedom.
Nonetheless, the coarsening procedure was able to reduce the number of elements by a factor of
approximately 74 in three levels. The final mesh has 1,710 elements, where 96% of them have a
309 Hierarchy of successively coarser meshes obtained by uniform coarsening for the nacelle model. (a) Level
1: initial base mesh (119,861 tetrahedra); (b) Level 2: first de-refined mesh (24,619 tetrahedra); (c) Level 3:
second de-refined mesh (6,477 tetrahedra); (d) Level 4: third de-refined mesh (1,819 tetrahedra)
largest dihedral angle below the value of 145°. The usage of mesh (d) for numerical computations is
clearly limited, since it gives a poor discretization of the complex curved model, but the goal of this
example is mainly to show that we are able to control the quality of the meshes even for a large
coarsening ratio. Selection of the right coarse mesh for a specific problem depends strongly on the
type of analysis to perform and it is not investigated here. The effect of locally improving the mesh
using the re-triangulation procedures during coarsening was investigated. The initial mesh was
de-refined six times without optimization after each edge collapsing. The final coarse mesh has
6,512 elements. In this case, for facilitating the coarsening process, the constraint on the largest
dihedral angle was relaxed from 160° to 175°.
8.5.1.2 Coarsening With/Without Local Re-Triangulation
The ratio of coarsening is given in Figure 8.15. The diagram clearly indicates that coarsening
without local re-triangulation is not able to produce more than one or two coarser meshes. In fact,
due to the increased number of badly shaped elements after each coarsening, constraints are
introduced in the process and most of the attempted edge collapses fail. In contrast, coarsening
using local re-triangulation after each edge collapsing is able to maintain a nearly constant, slightly
increasing coarsening ratio. This gives the possibility to use the coarsening procedure to create a
mesh as coarse as it is necessary.
Figure 8.15 Coarsening ratio for coarsening with and without local re-triangulation
8.5.2 Refinement of Triangulation Region
The refinement scheme adopted in this work is also edge based310, in the sense that edges marked for
refinement are split.
The algorithm implements all possible subdivision patterns corresponding to all possible
configurations of marked edges, to allow the maximum flexibility in how mesh refinement is
accomplished. A limit can be placed on the shortest edge to split, in order to avoid excessive
refinement. In the presence of curved boundaries, newly generated vertices classified on the model boundary must
be properly placed on the true geometric boundary. This snapping procedure is critical, in the sense
that it is the mechanism through which refinement of a given mesh improves the geometric
approximation that the mesh gives of the geometry. This operation relies on the interaction of the
refinement procedure with the geometric modeler storing the geometric information, and the
classification information of the mesh entities. The snapping of such a vertex can produce
invalid regions of negative volume or of poor quality. In this case, the re-triangulation procedure
explained in the following is applied and repeated until the vertex can be successfully snapped. For
supporting the computation of the restriction and prolongation operators, a double link is stored
310 H.L. De Cougny and M.S. Shephard, "Local modification tools for adaptive mesh enrichment and their parallelization", Scientific Computation Research Center, RPI, submitted to Comp. Meth. Appl. Mech.
from the vertex to the edge and back, together with the value of the split location along the edge. This
is realized "on the fly" during refinement of each marked edge. Clearly, not even a local search is
needed in this case. Please consult [Bottasso et al.]311 for further info.
8.5.2.1 Local Re-Triangulation
Local re-triangulation algorithms are an important aspect of any automated mesh modification
procedure, their goal being the control and the improvement of the quality of a mesh with respect to
a given criterion. The optimizing procedures implemented in this work are edge removal and multi-
face removal, which do not change the number of vertices, and edge collapsing and splitting of one
or more edges, faces or a region, which remove or add vertices. Edge removal deletes an edge from a
mesh by introducing one or more faces, depending on the number of regions surrounding the edge.
Face or multi-face removal represents the dual operation, removing one or more faces by introducing
a new edge. Figure 8.16 gives an example of these swaps for the simplest configuration, a 3-to-2
swap of the three elements [M01, M02, M0α, M0β], [M02, M03, M0α, M0β] and [M03, M01, M0α, M0β]
surrounding the edge [M0α, M0β]. The 3-to-2 swap is performed by introducing a new face [M01, M02, M03]
and by deleting the edge [M0α, M0β], yielding the two new elements [M01, M02, M03, M0α] and [M01, M02, M03, M0β]. The reverse
operation (2-to-3 swap) is given by deleting the face [M01, M02, M03] and introducing the edge
[M0α, M0β]. As previously explained, edge collapsing removes an edge by merging two vertices and
removing all regions connected to that edge.
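As an illustration of the connectivity bookkeeping behind the 3-to-2 swap described above, the sketch below operates on tetrahedra stored as 4-tuples of vertex ids. It is a simplified example under stated assumptions: element orientation, geometric validity checks and the mesh data structure of the actual implementation are omitted.

```python
def swap_3_to_2(tets, edge):
    """Replace the three tetrahedra surrounding 'edge' by two, removing the edge.

    'tets' is a list of 4-tuples of vertex ids; 'edge' is a pair (a, b).
    Assumes exactly three tets share the edge, as in the simplest configuration
    described in the text; orientation handling is omitted for brevity.
    """
    a, b = edge
    around = [t for t in tets if a in t and b in t]
    if len(around) != 3:
        raise ValueError("3-to-2 swap needs exactly three tets around the edge")
    # The three remaining vertices form the new shared face [v1, v2, v3].
    ring = sorted({v for t in around for v in t} - {a, b})
    if len(ring) != 3:
        raise ValueError("unexpected connectivity around the edge")
    v1, v2, v3 = ring
    new_tets = [(v1, v2, v3, a), (v1, v2, v3, b)]   # edge [a, b] is deleted
    return [t for t in tets if t not in around] + new_tets
```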
The splitting procedures introduce new vertices on one or more edges of a region. The new
configuration is then given by applying the corresponding refinement subdivision pattern. The
procedures are region based, in the sense that the algorithm tries to improve all regions which violate
a given mesh quality criterion. Given the impact that large dihedral angles usually have on the
condition number of the discrete problem that approximates the set of PDE's to be solved, we
typically use dihedral angles as optimization targets. As a first step, the dihedral angles of all
311Carlo L. Bottasso, Ottmar Klaas, Mark S. Shephard, “Data Structures and Mesh Modification Tools for
Unstructured Multigrid Adaptive Techniques”, Article in Engineering With Computers · January 1998.
elements considered for optimization are calculated. Each element violating a user defined threshold
value is put into a linear list. Depending on the configuration of each element in the list (number of
dihedral angles above the threshold value, topological or geometrical constraints, etc.), a suitable
subset of the above mentioned optimization procedures is applied to eliminate that element in favor
of improved elements.
The procedures might fail for a specific element if the resulting configuration is topologically or
geometrically not valid, or if they lead to a degradation of the quality of any element involved in that
local re-triangulation. In this case, or if the element is improved but the largest dihedral angle is still
above the threshold value, the element is considered for improvement in a second pass, after all
elements have been processed. Since the neighborhood of the elements that failed in the first pass
may have been modified, it is possible that they can be fixed in a second pass. The procedure is
repeated until a given threshold value is reached or no further improvement can be achieved. The
local re-triangulation algorithm is used to improve the meshes produced by the refinement and
coarsening procedures. Since the refinement is an edge based operation that takes into account all
possible subdivision patterns, refining an element is a localized procedure that does not affect the
neighborhood of that element, and consequently the local re-triangulation can be performed after
the refinement procedure is completed.
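A schematic driver for the two-pass optimization loop just described might look as follows; mesh.elements(), mesh.largest_dihedral() and mesh.try_local_retriangulation() are hypothetical hooks standing in for the edge removal, multi-face removal, collapsing and splitting operators of the actual tool.

```python
def optimize_elements(mesh, threshold_deg=145.0, max_passes=2):
    """Sketch of the two-pass local re-triangulation driver (hypothetical mesh API)."""
    for _ in range(max_passes):
        # Collect all elements whose largest dihedral angle violates the threshold.
        bad = [e for e in mesh.elements() if mesh.largest_dihedral(e) > threshold_deg]
        if not bad:
            break
        improved_any = False
        for elem in bad:
            if elem not in mesh.elements():          # may already have been removed
                continue
            # Apply a suitable local operator; it must not degrade any neighbour.
            improved_any |= mesh.try_local_retriangulation(elem)
        if not improved_any:                         # no further improvement possible
            break
```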
The situation is different when coarsening is considered. The coarsening procedure itself tends to
give a mesh of poor quality, since collapsing an edge has a strongly negative impact on the
dihedral angles of the surrounding elements. To prevent losing control of the mesh quality, especially
when multiple coarsening steps are performed, it is necessary to introduce a threshold value to be
satisfied by the largest dihedral angle in each of the newly generated elements. However, this usually
represents a strong constraint and prevents a large coarsening ratio. It is therefore advantageous to
improve the neighborhood of a to-be-removed element before picking the next edge for collapsing.
This can be done by sending a list of regions connected to the target vertex of the edge collapsing to
the local re-triangulation procedure, after each edge collapsing is performed. Such a locally improved
mesh makes the next edge collapse more likely to lead to acceptable elements. Local procedures,
such as the ones considered here, can only lead to local optima with respect to the quality of the
triangulation. Nonetheless, these tools have proven to be valuable in controlling the
degradation of the mesh quality during its adaptive modification. They also find application in the
context of curved model boundaries, when snapping of newly created vertices can create invalid or
poorly shaped regions. In this case, local re-triangulation tools can be used for eliminating those
regions that prevent snapping312 .
8.5.3 Refinement of Hexahedral Region (Near Wall)
The refinement of the hexahedral region of the grid is accomplished using the pattern formation
procedure of [Biswas and Strawn]313, which employs a parent-child data structure to split the cells.
Each hexahedral cell is then split according to a pattern, to generate 2:1, 4:1, or 8:1 sub-divisions.
This cell subdivision creates buffer cells, which are tetrahedral, pyramid, or prismatic elements used
to transition between different levels of hexahedral refinement, thus ensuring a conforming mesh
with no hanging nodes. An initial point spacing ρ0 is defined for each vertex as the average edge
length for all edges incident to the node. A larger grid spacing produces an iterative coarsening of
the mesh. Using the mesh deformation measure, we modify the local grid spacing to be
312 H.L. De Cougny and M.S. Shephard, "Local modification tools for adaptive mesh enrichment and their parallelization", Scientific Computation Research Center, RPI, Troy, NY, submitted to Comp. Meth. Appl. Mech.
313 Biswas, R., and Strawn, R.C., "Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems", NAS Technical Report.
$\rho = \dfrac{\rho_0}{\tau}\,, \qquad \tau = \dfrac{\sigma_{\min}}{\sigma_{\max}}$     Eq. 8.1
where the σ represent the dilatations of the tetrahedron in each of the three principal directions and
are equivalent to the singular values of the transformation matrix. Note that the more deformed the
cell, the larger the prescribed spacing, and hence an increased amount of coarsening will be
performed. This improves the likelihood of the distorted cell being removed. Conversely, the
enrichment procedures are invoked by specifying a smaller spacing. After the coarsening phase, an
appropriate gradation of cell size is restored by solving a Laplace equation for ρ, using the boundary
mesh spacing as Dirichlet boundary conditions. An approximate solution is obtained by summing the
difference in the point spacing for all edges N incident to the node using a relaxation technique.
$\rho^{\,n+1} = \rho^{\,n} + \dfrac{\varepsilon}{N}\sum_{k=1}^{N}\left(\rho_{k}^{\,n} - \rho^{\,n}\right)$     Eq. 8.2
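A minimal Python sketch of the relaxation in Eq. 8.2 is given below, assuming the spacing field is stored per node and the mesh connectivity is available as an edge list, with boundary nodes carrying the Dirichlet spacing values. The names and the Jacobi-style sweep are illustrative assumptions, not the original implementation.

```python
import numpy as np

def relax_point_spacing(rho, edges, fixed, eps=0.5, n_sweeps=50):
    """Jacobi-style relaxation of the point spacing rho (sketch of Eq. 8.2).

    rho   : array of spacing values, one per mesh node
    edges : list of (i, j) node pairs
    fixed : boolean mask of nodes held as Dirichlet data (boundary spacing)
    """
    rho = np.asarray(rho, dtype=float).copy()
    n = len(rho)
    # Build node-to-neighbour adjacency once from the edge list.
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(n_sweeps):
        new = rho.copy()
        for i in range(n):
            if fixed[i] or not nbrs[i]:
                continue
            diff = sum(rho[k] - rho[i] for k in nbrs[i])
            new[i] = rho[i] + eps * diff / len(nbrs[i])
        rho = new
    return rho
```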
Prescribing new point spacing also drives solution-based coarsening and refinement. A variation on
the solution error estimate developed in two dimensions by [Ilinca et al.]314 has been implemented in three
dimensions for arbitrary mesh topologies. The method is based on forming a higher order
approximation of the solution at each mesh point using a least squares approach. The difference
between the higher order reconstruction from incident nodes and the current solution forms the
error measurement. If the current mesh is sufficiently fine to support the spatial variation in the
solution, the estimated error will be low, allowing coarsening to take place. Conversely, a high degree
of error indicates additional refinement is needed.
8.5.3.1 Improvement to Near-Field Grid Generation Procedure (Hexahedral)
The successful drag prediction workshop series set the focus of its fourth gathering (DPW4) on the
blind prediction of drag and moment coefficients of the NASA common research model (CRM)
transonic wing-body-tail configuration315. One of the main objectives of DPW4 is to evaluate the
performance of state-of-the-art Navier-Stokes codes, thus this study documents some of the steps
undertaken at the Institute of Aerodynamics and Flow Technology of DLR, to prepare the
contribution to the DPW4. To identify possible CFD areas needing additional research and
development, both standard procedures and advanced methodologies, such as new grid generation
methods and advanced turbulence models, were used in this study. The input values required by the
advancing layer grid generation process, first layer spacing and expansion ratio, have to be chosen
wisely. It is important to resolve the same physical region on the various grid levels with the same
element types. A similar near-field extent normal to the walls guarantees that the transition location
from hexahedral/prismatic elements to tetrahedra is similar between the grid levels. Having the
element-type transitions in the same physical region allows any discretization errors to be captured
in a self-similar way on all grid levels. The relations between the grid levels in terms of first layer
spacing and number of wall-normal layers should follow the scaling factor given above. Given the
requirement for a self-similar total layer thickness, scaled first layer spacing and scaled total amount
314 Ilinca,C., Zhang, X.D., Trépanier, J.-Y., and Camarero, R., “A Comparison of Three Error Estimation Techniques
for Finite-Volume Solutions of Compressible Flows, “Computer Methods in Applied Mechanics and Engineering,
Vol. 189, pp. 1277-1294, 2000.
315 Simone Crippa, “Application of Novel Hybrid Mesh Generation Methodologies for Improved Unstructured CFD
Simulations”, 28th AIAA Applied Aerodynamics Conference - CFD Drag Prediction Workshop Results, 2010.
of layers, leaves only the expansion ratio as variable to be determined. The geometric series for the
total layer thickness (H) is
$H = \sum_{i=0}^{n} a\,q^{\,i} = a\,\dfrac{1-q^{\,n+1}}{1-q}$     Eq. 8.3
where the total number of layers is N = n + 1, the expansion ratio is q and the first layer spacing is a.
Keeping the total layer thickness between two grid levels constant (H1 = H2), results in
$a_1\,\dfrac{1-q_1^{\,n_1+1}}{1-q_1} = a_2\,\dfrac{1-q_2^{\,n_2+1}}{1-q_2}$     Eq. 8.4
Hereby the relation between a1 and a2, as well as between N1 and N2, is set by the scaling factor ∛3; for
example, with grid level 2 being finer than grid level 1, it follows that a2 = a1/∛3 and N2 = N1·∛3. Starting
with a sensible value for the expansion ratio q1 and a total number of layers N1, the only unknown in
Eq. 8.4 is q2, which can be computed iteratively. For the DPW4 grid-convergence family, the values
for the coarse and fine levels are derived from the medium grid. The first layer spacings given in the
gridding guidelines are used, as the scaling factor of 1.5 is sufficiently close to ∛3. The resulting values
for the near-field mesh are summarized in Table 8.1.
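The iterative determination of q2 from Eq. 8.4 can be sketched as below; the helper names are hypothetical, the grid-level scaling by ∛3 follows the text, and a simple bisection replaces whatever iteration the original study used.

```python
def total_height(a, q, N):
    """Total layer-stack thickness H = a (1 - q**N) / (1 - q) for N layers (cf. Eq. 8.3)."""
    return a * N if abs(q - 1.0) < 1e-12 else a * (1.0 - q ** N) / (1.0 - q)

def match_expansion_ratio(a1, q1, N1, scale=3.0 ** (1.0 / 3.0)):
    """Find q2 so that the finer level (a2 = a1/scale, N2 = round(N1*scale)) reproduces
    the same total layer thickness (cf. Eq. 8.4).  Simple bisection, assuming 1 < q2 < 2."""
    H = total_height(a1, q1, N1)
    a2, N2 = a1 / scale, int(round(N1 * scale))
    lo, hi = 1.0 + 1e-9, 2.0                     # bracket for the expansion ratio
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_height(a2, mid, N2) < H:        # thickness grows monotonically with q
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), a2, N2

# Illustrative values only (not the DPW4 inputs):
# q2, a2, N2 = match_expansion_ratio(a1=1.0e-3, q1=1.25, N1=30)
```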
Note that for full consistency, the number of wall-normal layers with constant spacing should also be
scaled by ∛ 3, but neglecting this was not deemed of primary influence to the results. Furthermore,
note that the expansion ratios fulfill the requirement given in the gridding guidelines only for the
medium and fine grids; the expansion ratio of the coarse grid is larger than the defined maximal
value of 1.25.
Figure 8.17 Comparison of Coarse, Medium and Fine Grids: lateral view on fore-body with symmetry
A comparison of the three final grids is shown in Figure 8.17, where the self-similar
relation between the three levels is visible in the highlighted region. In wall-tangential direction, the
factor of approx. 1.5 can be recognized by the cascade of 2, 3 and 4.5 quadrilateral elements. A similar
wall-normal total layer extent is also recognizable.
Figure 8.18 Local Dissipation Error of Drag Coefficient on field cut-plane at x =1400 inch;
isometric/downstream view
A simple solution to this problem is not known, thus this problematic region was not fully fixed for
the contribution to DPW4. The near-field layer contraction in these concave regions cannot be
completely excluded from the Solar grid generation process, thus a solution is sought on the solver
side. The TAU code has recently gained the capability to compute on chimera (overset) grids with
overlapping viscous boundaries. To make use of this capability, a fully hexahedral grid was generated
with a C-H topology around the complete wing airfoil and some of the wake at the wing root. The
grid spacing in the overlap/interpolation region is similar to the medium Solar grid, but the
resolution at the wing-body junction is improved due to the chosen H-topology, as opposed to an O-
grid topology. The five-million-element hexahedral grid is referred to as SolarChimera5, whereas
the initial medium grid is referred to plainly as Solar. A comparison between the Solar and SolarChimera5 grids at
the wing-body junction is shown in Figure 8.19.
Figure 8.19 Comparison of SolarChimera5 and Solar Grid at x = 1454 inch plane; viscous wall surface in dark grey, field cut in white
316 Jeffrey Slotnick, “Meshing Challenges for Applied Aerodynamics”, Boeing, January 2019.
Emerging methods and procedures are in development to build meshes that adapt to the flow solution,
with considerable promise to improve accuracy for a given number of points, reduce engineering time,
and lower the level of user expertise needed; several algorithms are being evaluated for speed,
robustness and effectiveness, and while they are not generally in production use yet, they are not
many years away. At the GMGW-2 workshop, [Slotnick]316 raised the following issues:
➢ Fairly well established procedures in place to build fixed meshes around complex geometry
➢Various types (unstructured, structured-overset, hybrid, etc.)
➢Various toolsets (with varying degrees of automation)
➢Resolution set by educated guess and best practices (developed from growing body of data
from internal/external studies)
➢ Typically one mesh used for range of flight conditions
➢ Mesh refinement study seldom done in practice
➢ Emerging methods and procedures in development to build meshes that adapt to the flow
solution
➢ Considerable promise to improve accuracy for a given number of points, reduce engineer
time, and lower level of user expertise needed
➢ Several algorithms being evaluated for speed, robustness, effectiveness
➢ Issues: surface geometry interface, robustness, mesh growth, strength of solver on irregular
grids, etc.
➢ Generally not in production use, but not many years away
Fixed grids continue to be the workhorse for CFD modeling of realistic geometry, with challenges remaining such as:
➢ Handling geometric complexity (large range of scales, components in close proximity, mesh
control/quality in tight regions, etc.)
➢ Resolving pertinent flow features (unknown location and importance, varies with flow conditions, etc.)
➢ Improving speed and robustness
➢ Better understanding of the dependency of grid resolution on physical modeling
➢ Better understanding of the switch from RANS to turbulence resolving simulations
8.7.1 Experience
From HiLiftPW-3, oil flow images from testing of the JAXA JSM configuration near stall conditions
identified regions of separated flow behind several slat brackets317. Many RANS simulations
predicted much larger "pizza slice" shaped separation patterns appearing at differing angles of
attack, with high sensitivity to turbulence model, mesh resolution, and angle of attack (see
Figure 8.20).
8.7.2 Observations
➢ Current mesh resolution for high-lift CFD (as utilized in HiLiftPW-3) appears insufficient to
(a) accurately assess physical modeling errors, and (b) to clearly identify the better
turbulence model for RANS simulations for full configurations.
317 Y. Ito, M. Murayama, Y. Yokokawa & K. Yamamoto (JAXA), K. Tanaka & T. Hirai (Ryoyu Systems), H. Yasuda (KHI), A. Tajima (Kawaju Gifu Engineering) & A. Ochi (KHI), "Japan Aerospace Exploration Agency's & Kawasaki Heavy Industries' Contribution to HiLiftPW-3", AIAA SciTech 2018, Kissimmee, FL, January 10, 2018.
➢ Need a systematic study of mesh resolution/convergence (fixed grid, adaptive) using solvers
that can utilize a wide range of turbulence modeling (RANS, Hybrid RANS/LES, WMLES, etc.)
on key test cases to clearly isolate numerical vs. physical modeling errors.
➢ The extruded, single-slat-bracket periodic model problem appears relevant, but the question
of whether the “pizza slices” are an artifact of insufficient numerical resolution has not been
answered (as they seem to be a robust feature, across codes and models), and turbulence-
resolving simulations have not begun.
318 Vangelis Skaperdas and Neil Ashton, "Development of high-quality hybrid unstructured meshes for the GMGW-1 workshop using ANSA", AIAA, January 2018.
grids, similar to the tools provided in CD-Adapco®. It is produced for the 1st AIAA Geometry and
Meshing Workshop and 3rd AIAA High-Lift Workshop. Particular focus is made on the process to
generate suitable grids for various CFD codes including OpenFOAM and Star-CCM+. Some of the
abbreviations are provided in Table 9.1 for clarity.

Table 9.1 Abbreviations
CRM    Common Research Model
GMGW   Geometry and Mesh Generation Workshop
HLPW   High Lift Prediction Workshop
JAXA   Japan Aerospace Exploration Agency
JSM    JAXA high-lift configuration Standard Model
MAC    Mean Aerodynamic Chord
STEP   Standard for the Exchange of Product model data

9.1.1 Geometry and Mesh Generation Background
The AIAA Drag Prediction and High Lift Prediction (DPHLP) workshops provide an opportunity for
engineers in the aerospace sector to present and exchange information on the latest CFD methods
and tools and to directly compare these methods on open-source geometries. The 1st Geometry and
Mesh Generation Workshop took place under the umbrella of the 3rd High Lift Prediction Workshop.
It was the first of its kind to focus specifically on the details of preparing high fidelity mesh models
for CFD simulations. The aim of the workshop was to assess the current state of the art in geometry
pre-processing and mesh generation technology and to bring together meshing specialists to discuss
challenges and possible solutions. The models studied were NASA's Common Research Model and
JAXA's high-lift configuration Standard Model. The JSM model was not studied in the workshop but,
given its inclusion in HLPW-3, the same mesh procedure was applied to both geometries. Both
geometries are available in high-lift configuration with slats and flaps extended, while the JSM
model is also available with an optional engine nacelle and pylon.
Figure 9.1 JSM Model with Engine Nacelle
The main goal for participating in HLPW-3 was to assess the open-source CFD code OpenFOAM.
Many aerospace-specific CFD codes are restricted for national security reasons, e.g., NASA CFD codes
are typically not available to researchers in the UK or Greece. Whilst a number of commercial CFD
codes are routinely used for industrial aerospace simulations, the ability to implement custom
turbulence models, numerical schemes and algorithms means that these are not ideal for research
and collaboration. Open-source CFD codes like OpenFOAM, SU2 and Saturne have grown in
popularity in recent years as part of a growing movement of international collaboration that is aided
by the ease of sharing. Whilst OpenFOAM has its own mesh generation utility, SnappyHexMesh, a
Cartesian-prismatic unstructured mesh generation tool, the experience of the authors has shown that
it is not suitable for low-y+ grids, and the region between the prismatic and Cartesian cells is often
subject to severe non-orthogonality and large cell-size jumps. For this reason an alternative mesh
generator is used, which is capable of generating high-quality grids that represent the kind of
unstructured grids that are typically used by the aerospace industry: ANSA® 17.1, a pre-processor
from BETA CAE Systems (see Figure 9.1).
9.1.2 Geometry Handling
The geometries for the CRM and JSM high-lift models were downloaded from the HLPW-3 website319.
A multitude of CAD file formats were available and for this project the STEP format was selected for
both CRM and JSM models.
9.1.2.1 The CRM Model
The STEP file of the HL-CRM model is in inches. It has a MAC of 275.8 inches and represents a full-scale
airplane model. No clean-up was required as the geometry had already been cleaned. Based
on the GMGW/HLPW-3 meshing guidelines document, a hemi-spherical domain, suitable for
imposition of far field boundary conditions, with a radius of 100 times MAC was created in ANSA and
connected to the half-symmetric airplane model. The raw CAD model was separated into only two
zones, the slat and the whole remaining model. In order to facilitate meshing and pre-processing, we
separated the model into 17 zones as shown in Figure 9.2. Two versions of the CRM model are
available320: one where the inboard flap is unconnected (referred to as "gapped") and one where the
gaps between the inboard flap, the outboard flap and the main fuselage were sealed (referred to as
"sealed"). In the latter case the worst proximity areas are removed, facilitating layers generation.
9.1.2.2 The JSM Model
The JSM model is designed in mm and it represents the actual wind tunnel model with a MAC of 529
mm. Geometry was read into ANSA without any topological problems. A hemi-spherical domain was
created with a radius of 100 times MAC. The model was separated in ANSA in 13 zones to facilitate
meshing and pre-processing as shown in Figure 9.3. Similarly to the HL-CRM model, the JSM model
is available in two variants, without engine nacelle, as well as with engine nacelle, as shown in Figure
9.3 using the HLPW-3 notation for cases. The geometry of the JSM model has a particularity, which
the HL CRM model does not have, that would eventually lead to problems in the generation of
layers321. Those areas can be easily identified in ANSA through a quick test layers generation run for
one layer. Such areas are identified, marked and can optionally be excluded from layers generation,
although for the case of this study we wanted to avoid any area of the model without layers, as that
would result in solution instability and error. Three such areas were found in the JSM geometry as
highlighted in Figure 9.4. It was therefore decided to perform some small local geometry
modifications322. The size of these geometrical additions is limited to around one or two local
element lengths so as not to cause a significant disturbance to the flow field. It is believed that the
benefits of allowing for good quality layer generation in these regions surpass any side-effects from
deviating from the original CAD geometry.
Figure 9.3 Computational Domain and Separation of Zones of the JSM Model with Engine Nacelle
Figure 9.4 Three Locations of Problematic Areas of the JSM Geometry for the Generation of Boundary
Layers
321 Vangelis Skaperdas, and Neil Ashton, “Development of high-quality hybrid unstructured meshes for the
GMGW-1 workshop using ANSA”, AIAA, January 2018.
322 Vangelis Skaperdas and Neil Ashton, "Development of high-quality hybrid unstructured meshes for the GMGW-1 workshop using ANSA", AIAA, January 2018.
automation, since for every new design variant, the sessions can be populated automatically. One of
the advantages of Batch Mesh tool is the fact that a scenario can be defined once, saved and then run
several times for every new geometry, ensuring automation and mesh consistency for every variant.
The CFD meshing algorithm in ANSA creates a high-quality surface mesh, controlled by a set of
settings defined for each session of the Batch Mesh.
In addition to these mesh controls, the user can create Size Boxes to limit the maximum length of the
surface and the volume mesh in different areas. Figure 9.5 displays the Batch Mesh surface meshing
scenario that contains eight sessions, each with different zone contents and mesh specifications for
the Medium JSM model. Ten Size Boxes were used, each one with a larger maximum length limit the
further away it is from the aircraft. Size Boxes can be cylindrical or hexahedral and can also be manipulated
by the user to take various curved forms aligned to the flow field where necessary. The only feature
currently missing from Batch Mesh tool is the creation of anisotropic mesh, usually at the leading and
trailing edges. The reason for this is that anisotropic mesh is not used in the automotive industry,
on which ANSA usage was built, while it offers a great advantage for aerospace meshing.
Figure 9.5 Batch Mesh Setup for the JSM Model with Size Boxes for Local Mesh Control
The main difference between these two industries with respect to meshing is that in the aerospace
industry the dimensions of the wings are much larger, while they require very fine curvature refinement. The
flow gradients are considerably larger in the chord wise direction than in the span wise direction and
as a result anisotropic mesh provides the most efficient refinement method. For the same level of
curvature refinement on a typical wing geometry, an isotropic mesh may require at least three times
more elements than an equivalent anisotropic mesh. In addition, in the aerospace industry the total
height of the boundary layer elements is considerably larger than that used in the automotive
industry.
In Figure 9.6 the advantage of anisotropic meshing is obvious as in highly convex areas like leading
and trailing edges, starting from a span wise anisotropy allows the layer elements to improve in
quality with every new step. In the end, the top cap of the layers is perfectly isotropic and this is the
best basis for the remaining pyramid and tetra meshing to follow. In contrast, when starting from an
isotropic mesh, the mesh quality of the layers deteriorates with each step. For the case of the isotropic
surface mesh, the top cap does not have a good quality and this makes the remaining volume meshing
process harder. Therefore, the surface meshing of all the models was performed with the following
combination of manual and Batch Mesh automatic operations:
➢ Identification and manual meshing of all trailing edges with map quad mesh of specific rows
of elements.
➢ Automatic Batch Meshing of all remaining surfaces of the model.
➢ Manual imprint of anisotropic mesh patterns away from the trailing edges and along all
leading edges.
Note how the anisotropic mesh dies out near the ends with the spanwise imposed nodal
distribution, in order to transition smoothly to isotropic mesh. The first surface scenarios for both HL-
CRM and JSM models were performed for the medium mesh size according to the meshing guidelines
for rows of elements across the trailing edges and mesh resolution. From the medium meshes we
generated the coarse and fine versions by simply scaling up and down respectively the assigned
element length of all sessions of Batch Mesh scenario. Of course when dealing with unstructured
mesh and especially with anisotropic features and Size Boxes, it is not easy to determine a priori the
scale length factor in order to achieve the desired volume cell count. This can only be done for
structured hex meshes, where cell number and edge length scale directly. Therefore, after certain
trial and error runs, we ended up with a length scale multiplying factor of 1.2 to 1.25. This resulted
in volume cell count changes of 1.6 to 1.8 between the different levels of refinement (for an isotropic
volume mesh the cell count scales roughly with the cube of the length factor, e.g. 1.2³ ≈ 1.7).
Figure 9.6 Resulting Layers for Isotropic Surface Mesh (Top) and Anisotropic (Bottom)
Using the batch mesh and simply changing the mesh type of all sessions, we also generated a quad-dominant
surface mesh for the coarse CRM case. Figure 9.7 displays the two variations of surface
mesh, tria dominant and quad dominant. The quad-dominant mesh has 30% fewer shell
elements for the same mesh resolution. The only problem that may arise with a quad dominant mesh
is that due to the fact that the near wall layers have extreme aspect ratios, there may be curved areas
of the model where these quads may also have considerable warping. The combination of warping
and high aspect ratio may lead to problems in the solution. At the time of the HLPW-3 the quad
dominant meshes were not prepared, so no simulations were performed for them. Currently ANSA
development work for the next version focuses on the integration of anisotropic meshing of leading
and trailing edges inside Batch Mesh tool, thereby eliminating any manual work for the user.
9.1.4 Volume Meshing
Figure 9.7 Close-ups of Coarse CRM Gapped-Flap Model with Comparison of Tria-Dominant (Top) vs. Quad-Dominant (Bottom) Surface Mesh
Volume meshing is also a part of the Batch Mesh. Two scenarios were created, one for layers and one
for volume meshing. The scenarios were set up once for the CRM and JSM models and then with
simple modifications in their parameters (growth rate, max size etc.) were executed automatically in
order to generate the final volume meshes.
9.1.4.1 Extrusion Layers Generation
The generation of layers is the most demanding part of the meshing process, as there are many
factors that should be considered: very high aspect ratio elements whose quality is difficult to control,
large total boundary layer height, resulting in proximity issues, especially around the areas of the
flaps and slats, where the gaps are small. The ANSA layers generation algorithm is very robust and
controllable, with characteristics like:
➢ Generation of initial layers without growth for better refinement of the near wall region.
➢ Generation of initial layers without vector smoothing ensuring high orthogonality near the
wall.
➢ Advanced smoothing algorithm to overcome problems of layers extrusion in concave areas.
➢ Generation of layers with different growth rate, number and first height from different zones
of the model.
➢ Local element squeezing and collapsing at proximities to avoid intersections and bad quality.
➢ Local collapsing when a target aspect ratio is reached, ensuring a good volume transition to the
tetra mesh to be connected.
Layers squeezing and collapsing modes work in combination. The user can specify a maximum aspect
ratio that the elements can attain when squeezed in order to overcome proximities. If this limit is
exceeded then ANSA switches to local layer collapsing. Collapsing works for both penta and hex
elements. Depending on the number of nodes that need to be collapsed in a certain area, pyramid and
tetra elements are created out of the original penta and hex elements. Of course, no collapsing must
take place at the first layers, as that would result in very skewed tetras and pyramids. As the layers
grow thicker however, collapsing does not compromise the quality of the resulting elements. In
contrast to the recommended meshing guidelines of the workshop, the first layer height was kept
constant throughout the mesh refinement study. The reason for this is that the initial simulations
that were performed showed that a y+ value of just below 1 was achieved with these values, so there
was no reason to change this parameter, as it was set to its optimum value.
During the mesh refinement study the growth rate of the layers was reduced from 1.25 for the coarse
mesh to 1.1 for the fine mesh. The first two layers were kept with constant height, as prescribed by
the meshing guidelines. The number of layers was increased for the finer meshes in order to maintain
the same total boundary layer height. In addition, the number of layers differed between the main
wing and the fuselage. The reason for this was that the surface mesh length on the wing was smaller
than that of the fuselage. As a result, more layers were needed on the fuselage in order to reach a
layer thickness with aspect ratio just below 1, so as to have a good volume ratio between the last layer
and the first tetra or pyramid to connect to it323. Hex layers growing from a quad surface mesh are also
checked in every step for warping. In certain cases, due to local squeezing at proximities, if the top
cap warping exceeds a user specified threshold, ANSA splits the hex into two pentas, so as to avoid
having highly warped quads that would have a negative effect in the pyramid generation for the tetra
meshing. For further details regarding the meshing, consult the original reference324.
9.1.4.2 Tetra Meshing
After layers generation is completed, the third and final Batch Mesh scenario is executed. It consists
of the automatic detection of the boundaries of the external volume, and its meshing with tetra
elements using the TetraRapid volume meshing algorithm. This algorithm is a hybrid advancing-front/Delaunay
method, optimized for speed, smooth size variation and robust surface capturing, even in
aerospace applications where mapped anisotropic high-aspect-ratio surface mesh is present and size
variations of up to a factor of one hundred thousand exist between the surface of the model and the
far-field boundary. Even in such cases, and despite the fact that it runs on a single thread, it
manages to generate high-quality tetra mesh at a speed of one to three million tetras per minute,
depending on the complexity of the domain and the presence of additional refinement Size Boxes.
323 Vangelis Skaperdas, and Neil Ashton, “Development of high-quality hybrid unstructured meshes for the
GMGW-1 workshop using ANSA”, AIAA, January 2018.
324 See Above.
The growth rate of the volume mesh was set to 1.15. Figure 9.8 shows the medium JSM volume
mesh.
325 N. Ashton, M. Fuchs, C. Mockett, and B. Buda, "EC135 Helicopter Fuselage," in Go4Hybrid: Grey Area Mitigation for Hybrid RANS-LES Methods, C. Mockett, W. Haase, and D. Schwamborn, Eds., Springer International Publishing, 2018, pp. 2013–2015.
cells, it is likely that meshes of up to a billion cells might be required. The agreement between
OpenFOAM and STAR-CCM+ is within 0.5% for the lift coefficient and less than 2% for the drag
coefficient. They show the same outboard flap separation with the size and position being almost
identical. This less than 2% difference to a popular commercial CFD code would suggest that
OpenFOAM is a competitive tool for engineering analysis, which reflects the recent findings of
[Ashton et al.]326.
Figure 9.9 CL and CD for CRM Geometry at 8 degree AoA using OpenFOAM and STAR-CCM+
9.1.5.2 JSM
Close agreement between OpenFOAM and STAR-CCM+ was observed for the CRM geometry, however
without experimental data it is not possible to assess the accuracy. The JAXA Standard Model (JSM)
high-lift model is similar to the CRM but has a detailed experimental data set making it ideal to assess
the accuracy of OpenFOAM and STAR-CCM+. The flow conditions are a Reynolds number of Re = 1.9×10⁶
and a Mach number of M = 0.172. The viscosity is computed using Sutherland's law and the
density is based upon the ideal gas law. Simulations are conducted at 4.36, 10.47, 14.54, 18.58, 20.59,
and 21.57 degrees angle of attack, and all simulations use the Spalart-Allmaras turbulence model. Figure
9.10 shows the lift and drag coefficient throughout the angle of attack range using OpenFOAM and
STAR-CCM+ for the geometry (no nacelle). It can be seen that there is again close agreement between
STAR-CCM+ and OpenFOAM, with differences only becoming clear in the post-stall region. At 4.36
degree, the flow is completely attached in both CFD and the experiment, which is reflected in the
close agreement between CFD and experiment for the lift in the lower angle of attack range. At
18.57 degree, just before stall, the agreement in the total lift is close, however the flow structures
start to exhibit slightly too much stall in the outboard wing section. By 21.57 degree where the flow
is now stalled, the agreement is close but actually for the wrong reason. Whereas the experimental
flow-vis shows separation at both the root and the most outboard region of the wing, the CFD (both
STAR-CCM+ and OpenFOAM) shows almost no separation at the root and much larger separation at the
outboard of the wing. The total amount of separation is roughly similar, which explains why the lift
and to a lesser extent the drag follow the experimental values. Given that all simulations were
undertaken with the Spalart-Allmaras model with no corrections for curvature nor anisotropy of the
flow, it is not surprising that the results do not perfectly correlate with the experiment. More detailed
326N. Ashton, M. Fuchs, C. Mockett, and B. Buda, “EC135 Helicopter Fuselage,” in Go4Hybrid: Grey Area
Mitigation for Hybrid RANS-LES Methods, vol. 134, C. Mockett, W. Haase, and D. Schwamborn, Eds. Cham:
Springer International Publishing, 2018, pp. 2013–2015.
results for these cases are shown in [Ashton et al.]327. The results from both the CRM and JSM have
shown that both STAR-CCM+ and OpenFOAM on ANSA®-generated grids can perform well for complex
full aircraft geometries and in the case of the JSM, match experimental values up to the stall region.
The next steps are to properly assess the code for transonic flows, which is the typical flow regime
for industrial aerospace simulations.
Figure 9.10 Lift and Drag Coefficients for the JSM Geometry using OpenFOAM and STAR-CCM+
327N. Ashton and V. Skaperdas, “Verification and Validation of OpenFOAM for High-Lift Aircraft Flows,” AIAA
Journal, 2017.
have been pursued to automate the process. But first we consider the typical cycle (i.e., V and W cycles),
which is the essence of the multigrid process.
9.2.1 Multigrid Cycle
Iterative methods usually reduce the high-frequency (i.e., oscillatory) errors but fail to reduce the
low-frequency (i.e., smooth) errors, which results in a poor rate of convergence328. High-frequency
and low-frequency refer to the error relative to the mesh spacing: with an iterative method the error
on the finer grid soon looks smooth, but if the solution is transferred to a coarser grid the same error
appears oscillatory again. To exploit this, the solution is sampled from the finer grid to the coarser
grid, where a few iterations of an iterative method (for example, Gauss-Seidel) are performed to
reduce what is now high-frequency error. Transferring the solution from the finer grid to the
coarser grid is called restriction. After a few relaxation steps on the coarse grid this error component
is reduced, and the solution is transferred back to the finer grid for further calculation, by which point
the low-frequency error has also been reduced. Transferring the solution from the coarser grid to the
finer grid is called prolongation or interpolation (see Figure 9.11). On the finer grid it is advisable
to perform a few more iterations to keep the high-frequency error small. The solution on the finer
grid then has both reduced high-frequency and reduced low-frequency error. These cycles are
continued until the solution meets the desired convergence criteria. Prolongation, restriction and
fewer iterations on the finer grid give a faster convergence rate and a quicker solution than standard
stationary iterative methods329. An investigation into a multigrid algorithm by [Gargoloff]330 shows
that the maximum value of the residuals for a typical multigrid solver was reduced much faster than
for the one-level grid solver; the 2D case was run for a NACA 0012 airfoil. Further discussion
regarding smoother terminology and usage can be found in [Birken et al.]331.
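The following self-contained Python sketch illustrates the restriction/relaxation/prolongation pattern just described for a 1D Poisson problem, using a linear two-grid correction cycle with a weighted-Jacobi smoother and injection restriction. It is a didactic example under its own assumptions, not the FAS formulation used by the CFD solvers discussed later.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother for -u'' = f with homogeneous Dirichlet boundary conditions."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    """Residual r = f - A u of the second-order finite-difference Poisson operator."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h, nu1=2, nu2=2):
    """One V(nu1, nu2) two-grid cycle: pre-smooth, restrict, coarse solve, prolong, post-smooth."""
    u = jacobi(u, f, h, nu1)
    r = residual(u, f, h)
    rc = r[::2].copy()                       # injection restriction to the coarse grid
    nc = len(rc) - 1
    H = 2 * h
    # Solve the coarse-grid error equation directly (tridiagonal Poisson matrix).
    A = (np.diag(2 * np.ones(nc - 1)) - np.diag(np.ones(nc - 2), 1)
         - np.diag(np.ones(nc - 2), -1)) / (H * H)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Linear prolongation of the coarse-grid correction back to the fine grid.
    e = np.interp(np.arange(len(u)) * h, np.arange(len(rc)) * H, ec)
    u += e
    return jacobi(u, f, h, nu2)

# Minimal usage: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x).
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
```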
328 Ezhilmathi, “Magic behind the Most of the CFD solvers for HPC “, Scientific Computing blog, 2013.
329 Same as above.
330 A Dissertation by Joaquin Ivan Gargoloff, “A Numerical Method For Fully Nonlinear Aero-elastic Analysis”,
Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements
for the degree of DOCTOR OF PHILOSOPHY, 2007.
331 Philipp Birken, Jonathan Bull, and Antony Jameson, "A note on terminology in multigrid methods", Proc. Appl.
conform to the original geometry either manually or automatically, becomes an unpleasant task. A
usual and tedious approach to unstructured multigrid methods is to generate a sequence of
completely independent coarse and fine meshes and use linear interpolation to transfer
variables back and forth between the various meshes of the sequence, within a multigrid cycle
332-333. The meshes may be generated using any feasible grid generation technique and will generally
332 Mavriplis DJ., “Three-dimensional multigrid for the Euler equations”, AIAA , 1992.
333 Leclercq M.P., "Résolution des équations d'Euler par des méthodes multigrilles; conditions aux limites en régime hypersonique", PhD thesis, Dept. of Applied Mathematics, Univ. de Saint-Etienne.
334 Lallemand M, Steve H, Dervieux A., “Unstructured multi gridding by volume agglomeration: current status”,
through a graph-based algorithm similar to the vertex removal procedure described above. The flow
solver must be modified to run on arbitrarily shaped control volumes on the coarse levels. For
inviscid-flow control-volume formulations, this presents little difficulty, since the equations are
generally discretized as fluxes over individual control-volume faces, which can be used to build
contour integrals about arbitrarily shaped control volumes. The observed convergence rate of the
agglomeration multigrid method is almost identical to that obtained with the overset-mesh method.
9.2.3 Viscous Flow Consideration
For viscous flows, the discretization of diffusion terms on arbitrarily shaped control volumes is no
longer straightforward. An algebraic interpretation of agglomeration multigrid provides a
mechanism for dealing with equations sets that contain diffusion terms. Borrowing from the
algebraic multigrid literature [Ruge & Stüben 1987], a coarse-grid operator may be constructed by
projecting the fine-grid operator onto the space spanned by the coarse-grid basis functions. In
general, the multigrid convergence rates achieved for viscous flow cases are substantially slower
than those achieved for inviscid cases. The slower convergence in the viscous flow cases is principally
due to the anisotropic stretching of the mesh in the boundary-layer and wake regions. Unstructured
multigrid methods offer the possibility of designing schemes that are insensitive to anisotropic
effects. The graph-based agglomeration or vertex coarsening algorithms described above can be
modified to merge only those neighboring control volumes or delete only those neighboring vertices
that are the most strongly coupled to the current point. By coarsening or agglomerating only in the
direction of strongest coupling, as determined by the magnitude of the coefficients in the fine-grid
discrete equations, rather than eliminating all possible neighbors, coarse levels that are optimal for
the problem at hand can be generated. An alternate strategy for improving the convergence of
multigrid methods is to employ a stronger smoother as the base grid solver. Any of the implicit
methods described above may be substituted for an explicit method. There is an
obvious duality between the construction of sub regions for implicit methods and coarsening
strategies for multigrid methods that should be exploited in the future to better couple locally implicit
strategies with multigrid methods. The above discussion on multigrid methods centers on the Full
Approximation Storage (FAS) technique, where multigrid is applied directly to the nonlinear form
of the governing equations. Multigrid may also be employed as a solver for the linear system arising
at each time step of an implicit scheme, or even as a preconditioner for an iterative solution strategy.
While these strategies may yield desirable robustness characteristics, they are plagued by the high
memory requirements of the implicit linearization, which are avoided in the FAS approach.
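A rough sketch of coarsening by strongest coupling, as described above, is given below; the coefficient table, the greedy grouping and all names are illustrative assumptions rather than the algorithm of any particular solver.

```python
def directional_agglomerate(volumes, coeff, threshold=0.5):
    """Sketch of agglomeration along the direction of strongest coupling.

    'volumes' is an iterable of control-volume ids; 'coeff[(i, j)]' holds the
    magnitude of the fine-grid coefficient linking volumes i and j (assumed to
    contain both (i, j) and (j, i) entries).  A neighbour j is merged into the
    seed i only when |a_ij| is close to the strongest coupling, so agglomeration
    proceeds along the direction of mesh stretching.
    """
    merged = set()
    groups = []
    for i in volumes:
        if i in merged:
            continue
        nbrs = [(j, c) for (a, j), c in coeff.items() if a == i and j not in merged]
        group = [i]
        if nbrs:
            cmax = max(c for _, c in nbrs)
            group += [j for j, c in nbrs if c >= threshold * cmax]
        merged.update(group)
        groups.append(group)
    return groups
```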
9.2.3.1 Case Study 1 - Development and Application of Agglomerated Multigrid Methods for
Complex Geometries
Authors : Hiroaki Nishikawa, Boris Diskin and James L. Thomas
Appearance : 40th Fluid Dynamics Conference and Exhibit, 28 June - 1 July 2010, Chicago, Illinois
Source : AIAA 2010-4731. http://arc.aiaa.org | DOI: 10.2514/6.2010-4731
[Nishikawa et al.]335 reports on progress made in the development of agglomerated multigrid
techniques for fully unstructured grids in three dimensions, building upon two previous studies
focused on efficiently solving a model diffusion equation. It demonstrates a robust fully-coarsened
agglomerated multigrid technique for 3D complex geometries, incorporating the following key
developments: consistent and stable coarse-grid discretization, a hierarchical agglomeration scheme,
and line-agglomeration/relaxation using prismatic-cell discretizations in the highly-stretched grid
regions. A significant speed-up in computer time over state-of-art single-grid computations is
335Hiroaki Nishikawa, Boris Diskin and James L. Thomas, “Development and Application of Agglomerated
Multigrid Methods for Complex Geometries”, 40th Fluid Dynamics Conference and Exhibit 28 June - 1 July 2010,
Chicago, Illinois, AIAA 2010-4731.
demonstrated for a model diffusion problem, the Euler equations, and the Reynolds-averaged Navier-
Stokes equations for 3D realistic complex geometries.
9.2.3.1.1 Introduction
Multigrid techniques [1] are used to accelerate convergence of current Reynolds-Averaged Navier-
Stokes (RANS) solvers for both steady and unsteady flow solutions, particularly for structured-grid
applications. [Mavriplis et al. ][2, 3, 4, 5] pioneered agglomerated multigrid methods for large-scale
unstructured-grid applications. During the present development, a serious convergence degradation
in some of the state-of-the-art multi-grid algorithms was observed on highly-refined grids. To
investigate and overcome the difficulty, we critically studied agglomerated multigrid techniques [6,
7] for two- and three-dimensional isotropic and highly-stretched grids and developed quantitative
analysis methods and computational techniques to achieve grid-independent convergence for a
model equation representing laminar diffusion in the incompressible limit. It was found in Ref. [6]
that it is essential for grid-independent convergence to use consistent coarse-grid discretization. In
the later Ref. [7], it was found that the use of prismatic cells and line-agglomeration/relaxation is
essential for grid-independent convergence on fully-coarsened highly-stretched grids.
Here, we extend and demonstrate these techniques for inviscid and viscous flows over complex
geometries. The paper is organized as follows. Finite-volume discretization employed for target grids
are described. Details of the hierarchical agglomeration scheme are described. Elements of the
multigrid algorithm are then described, including discretization on coarse grids. Multigrid results for
complex geometries are shown for a model diffusion equation, the Euler equations, and the RANS
equations. The final section contains conclusions and recommendations for future work.
9.2.3.1.2 Discretization
The discretization method is a finite-volume discretization (FVD) centered at nodes. It is based on
the integral form of the governing equations of interest:
$\oint_{\Gamma} \left(\mathbf{F}\cdot\hat{\mathbf{n}}\right)\, d\Gamma = \iint_{\Omega} s \, d\Omega$     Eq. 9.1
where F is a flux tensor, s is a source term, Ω is a control volume with boundary Γ, and n̂ is the
outward unit normal vector. For the model diffusion (Laplace) equation, the boundary conditions are
taken as Dirichlet, i.e., specified from a known exact solution over the computational boundary.
Tests are performed for a constant manufactured solution, U(x, y, z) = 10.0, with a randomly
perturbed initial solution. For inviscid flow problems, the governing equations are the Euler
equations. Boundary conditions are a slip-wall condition and inflow/outflow conditions on open
boundaries. For viscous flow problems, the governing equations are the RANS equations with the
Spalart-Allmaras one-equation model [8]. Boundary conditions are a no-slip condition on walls and
inflow/outflow conditions on open boundaries. The source term, s, is zero except for the turbulence-
model equation (see Ref. [8]).
Figure 9.13 Illustration of a node-centered median-dual control volume (shaded). Dual faces connect edge midpoints with primal cell centroids. Numbers 0-4 denote grid nodes
The general FVD approach requires partitioning the domain into a set of non-overlapping control
volumes and numerically implementing Eq. 9.1 over each control volume. Node-centered schemes
define solution values at the mesh nodes. In 3D, the primal cells are tetrahedra, prisms, hexahedra,
or pyramids. The median-dual partition [9, 10] used to generate control volumes is illustrated in
Figure 9.13 for 2D. These non-overlapping control volumes cover the entire computational domain
and compose a mesh that is dual to the primal mesh.
The main target discretization of interest for the model diffusion equation and the viscous terms of
the RANS equations is obtained by the Green-Gauss scheme [11, 12], which is a widely-used viscous
discretization for node-centered schemes and is equivalent to a Galerkin finite-element discretization
for tetrahedral grids. For mixed-element cells, edge-based contributions are used to increase the h-
ellipticity of the operator [11, 12]. The inviscid terms are discretized by a standard edge-based
method with unweighted least-squares gradient reconstruction and Roe's approximate Riemann
solver [13]. Limiters are not used for the problems considered in this paper. The convection terms of
the turbulence equation are discretized with first-order accuracy.
9.2.3.1.3 Agglomeration Scheme
As described in the previous papers [6,7], the grids are agglomerated within a topology-preserving
framework, in which hierarchies are assigned based on connections to the computational
boundaries. Corners are identified as grid points with three or more boundary-condition-type
closures (or three or more boundary slope discontinuities). Ridges are identified as grid points with
two boundary-condition-type closures (or two boundary slope discontinuities). Corners are
never agglomerated, ridges are agglomerated only with ridges, and valleys are agglomerated only
with valleys. A typical boundary agglomeration generated by the above rules is shown in Figure
9.14.
The conditional entries denote that further inspection of the connectivity of the topology must be
considered before agglomeration is allowed. For example, a ridge can be agglomerated into an
existing ridge agglomeration if the two boundary conditions associated with each ridge are the same.
For valleys or interiors, all available neighbors are collected and then agglomerated one by one in the
order of larger number of edge-connections to a current agglomeration until the maximum threshold
of agglomerated nodes (4 for valleys; 8 for interiors) is reached. The prolongation operator P1 is
modified to prolong only from hierarchies equal to or above the hierarchy of the prolonged point.
Hierarchies on each agglomerated grid are inherited from the finer grid.
For the results reported in this paper, we employ agglomeration scheme II described in previous
papers [6 , 7]. It has been modified to deal with viscous meshes using implicit-line agglomeration. It
performs the agglomeration in the following sequence:
Figure 9.16 Grids and Convergence of the Model Diffusion Equation for the F6 Wing-Body Combination
Figure 9.17 Grids and Convergence of the Model Diffusion Equation for the DPW-W2 Case
interiors, agglomerations containing only a few volumes (typically one) are combined with other
agglomerations. Figure 9.16 and Figure 9.17 show primal grids and agglomerations for the F6
wing-body combination and the DPW-W2 [14] grids. These grids are viscous grids; the primal grid
has prismatic viscous layers around the body and the wing. Coarsening ratios are indicated by rk
(k = 1, 2, 3, 4) in parentheses. Line agglomeration was applied in these regions. Figure 9.18 shows
primal grids and agglomerations for a wing-body combination, a wing-flap combination, and a 3D wing
with a blunt trailing edge, all of which are pure-tetrahedral inviscid grids.
Figure 9.18 Grids and Convergence for the Wing-Flap Inviscid Case
9.2.3.1.4 Single-Grid Iterations
Single-grid iteration scheme is based on the implicit formulation:
$\left(\dfrac{\Omega}{\Delta\tau} + \dfrac{\partial \hat{\mathbf{R}}^{*}}{\partial \mathbf{U}}\right)\delta\mathbf{U} = -\hat{\mathbf{R}}(\mathbf{U})$     Eq. 9.2
where R̂(U) is the target residual computed for the current solution U, Δτ is a pseudo-time step,
∂R̂*/∂U is an exact/approximate Jacobian, and δU is the change to be applied to the solution U. An
approximate solution to Eq. 9.2 is computed by a certain number of iterations on the linear system
(linear sweeps). The update of U completes one nonlinear iteration. The RANS equations are iterated in
a loosely-coupled formulation, updating the turbulence variables after the mean-flow variables at each
nonlinear iteration. The left-hand-side operator of Eq. 9.2 includes an exact linearization of the
viscous (diffusion) terms and a linearization of the inviscid terms involving first-order contributions
only. Thus, the iterations represent a variant of defect correction.
Typically in single-grid FUN3D RANS applications, the first-order Jacobian corresponds to the
linearization of Van Leer's flux-vector splitting. For inviscid cases, we consider using the linearization
of Roe's approximate Riemann solver. Jacobians are updated after each iteration. The linear sweeps
performed before each nonlinear update include νp sweeps of the point multi-color Gauss-Seidel
relaxation performed through the entire domain followed by νl line-implicit sweeps in stretched
regions. The line-implicit sweeps are applied only when solving the model diffusion or the RANS
equations. In a line-implicit sweep, unknowns associated with each line are swept simultaneously by
inverting a block tridiagonal matrix [7]. For RANS simulations, νp = νl = 15 for the mean-flow
equations and νp = νl = 10 for the turbulence equation. For the model diffusion equation, only one
linear sweep is performed per nonlinear iteration, i.e., νp = νl = 1, and the exact Jacobian is computed
only once at the beginning of the entire calculation. In spite of the linearity of the model diffusion
equation, computations of R̂(U) in Eq. 9.2 do not employ the exact Jacobian, thus providing a better
similarity to nonlinear computations.
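As a hedged illustration of the defect-correction iteration in Eq. 9.2, the sketch below assumes scalar unknowns, a dense approximate Jacobian and plain lexicographic Gauss-Seidel sweeps in place of the multi-colour point and line-implicit sweeps; the callbacks are hypothetical stand-ins, not the FUN3D interfaces.

```python
import numpy as np

def nonlinear_iteration(U, residual, approx_jacobian, volume, dtau, n_sweeps=15):
    """One defect-correction step of Eq. 9.2:
       (Omega/dtau * I + dR*/dU) dU = -R(U), followed by U <- U + dU.

    'residual' and 'approx_jacobian' are hypothetical callbacks returning the
    target residual vector and a (possibly first-order) Jacobian matrix.
    """
    R = residual(U)
    A = approx_jacobian(U) + np.diag(volume / dtau)   # left-hand-side operator
    b = -R
    dU = np.zeros_like(U)
    # Linear sweeps before the nonlinear update (plain Gauss-Seidel for brevity).
    for _ in range(n_sweeps):
        for i in range(len(U)):
            dU[i] = (b[i] - A[i, :] @ dU + A[i, i] * dU[i]) / A[i, i]
    return U + dU
```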
9.2.3.1.5 Multigrid V-Cycle
The multigrid method is based on the full-approximation scheme (FAS) [1, 15] where a coarse-grid
problem is solved/relaxed for the solution approximation. A correction, computed as the difference
between the restricted fine-grid solution and the coarse-grid solution, is prolonged to the finer grid
to update the fine-grid solution. The two-grid FAS is applied recursively through increasingly coarser
grids to define a V-cycle. A V-cycle, denoted as V (ν1 ; ν2), uses ν1 relaxations performed at each grid
before proceeding to the coarser grid and ν2 relaxations after coarse-grid correction. On the coarsest
grid, relaxations are performed to bring two orders of magnitude residual reduction or until the
maximum number of relaxations, 10, is reached.
9.2.3.1.6 Inter-Grid Operators
The control volumes of each agglomerated grid are found by summing control volumes of a finer grid.
An operator that performs the summation is given by a conservative agglomeration operator, R0,
which acts on fine-grid control volumes and maps them onto the corresponding coarse-grid control-
volumes. Any agglomerated grid can be defined, therefore, in terms of R0 as
Ωc = R 0 Ωf
Eq. 9.3
where superscripts c and f denote entities on coarser and finer grids, respectively. On the
agglomerated grids, the control volumes become geometrically more complex than their primal
counterparts and the details of the control-volume boundaries are not retained. The directed area of
a coarse-grid face separating two agglomerated control volumes, if required, is found by lumping the
directed areas of the corresponding finer-grid faces and is assigned to the virtual edge connecting the
centers of the agglomerated control volumes. Residuals on the fine grid, R̂^f, corresponding to the
integral Eq. 9.1 are restricted to the coarse grid by the conservative agglomeration operator, R0, as

\hat{\mathbf{R}}^{c} = R_0\, \hat{\mathbf{R}}^{f}
Eq. 9.4

where R̂^c denotes the fine-grid residual restricted to the coarse grid. The fine-grid solution
approximation, U^f, is restricted as

\mathbf{U}_{0}^{c} = \frac{R_0\left(\mathbf{U}^{f}\,\Omega^{f}\right)}{\Omega^{c}}
Eq. 9.5

where U0^c denotes the fine-grid solution approximation restricted to the coarse grid. The restricted
approximation is then used to define the forcing term to the coarse-grid problem as well as to
compute the correction, (δU)c:
(\delta\mathbf{U})^{c} = \mathbf{U}^{c} - \mathbf{U}_{0}^{c}
Eq. 9.6
where Uc is an updated coarse-grid solution obtained directly from the coarse-grid problem. The
correction to the finer grid is prolonged typically through the prolongation operator, P1, that is exact
for linear functions, as
(\delta\mathbf{U})^{f} = P_1\, (\delta\mathbf{U})^{c}
Eq. 9.7
The operator P1 is constructed locally using linear interpolation from tetrahedra defined on the
coarse grid. The geometrical shape is anchored at the coarser-grid location of the agglomerate that
contains the given finer control volume. Other nearby points are found by the adjacency graph. An
enclosing simplex is sought that avoids prolongation with non-convex weights and, in situations
where multiple geometrical shapes are found, the first one encountered is used. Where no enclosing
simplex is found, the simplex with minimal non-convex weights is used.
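Eqs. 9.3-9.5 reduce to simple summations once each fine control volume stores the index of the agglomerate that absorbs it. The sketch below assumes such an ownership array and toy data; the piecewise-constant prolongation at the end is a deliberate simplification of the linear P1 operator described above.

```python
import numpy as np

# assumed output of an agglomeration pass: coarse_of[i] = coarse cell owning fine cell i
coarse_of = np.array([0, 0, 0, 1, 1, 2, 2, 2])                  # 8 fine cells -> 3 agglomerates
vol_f = np.array([1.0, 1.0, 0.5, 2.0, 1.0, 0.5, 0.5, 1.0])      # fine control volumes
res_f = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.0, 0.2, -0.05]) # fine-grid residuals R^f
u_f = np.array([1.0, 1.1, 0.9, 2.0, 2.1, 3.0, 2.9, 3.1])        # fine-grid solution U^f

n_c = coarse_of.max() + 1

# Eq. 9.3 analogue: coarse control volumes are sums of the fine volumes they absorb
vol_c = np.bincount(coarse_of, weights=vol_f, minlength=n_c)

# Eq. 9.4 analogue: conservative restriction of the residual (plain summation, R0 R^f)
res_c = np.bincount(coarse_of, weights=res_f, minlength=n_c)

# Eq. 9.5 analogue: volume-weighted restriction of the solution, U0^c = R0(U^f Omega^f) / Omega^c
u_c0 = np.bincount(coarse_of, weights=u_f * vol_f, minlength=n_c) / vol_c

# Simplified (piecewise-constant) prolongation: inject the coarse correction back to
# every fine cell of its agglomerate; the linear P1 operator of the text would instead
# interpolate from an enclosing coarse-grid simplex.
du_c = np.array([0.01, -0.02, 0.03])          # stand-in coarse-grid correction
du_f = du_c[coarse_of]

print(vol_c, res_c, u_c0, du_f)
```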
9.2.3.1.7 Coarse-Grid Discretization
For inviscid coarse-grid discretization, a first-order edge-based scheme is employed. For the model
equation and the viscous term in the RANS equations, two classes of coarse-grid discretization were
previously studied [6, 7]: the Average-Least-Squares (Avg-LSQ) and the edge-terms-only (ETO)
schemes. The consistent Avg-LSQ schemes are constructed in two steps: first, LSQ gradients are
computed at the control volumes; then, the average of the control-volume LSQ gradients is used to
approximate a gradient at the face, which is augmented with the edge-based directional contribution
to determine the gradient used in the flux. There are two variants of the Avg-LSQ scheme.
Table 9.3 (summary of discretizations) – Primal grid: second-order edge-based reconstruction (inviscid), Green-Gauss (viscous/diffusion); Coarse grids: first-order edge-based reconstruction (inviscid), face-tangent Avg-LSQ (viscous/diffusion). Coarse-grid Jacobians: exact or approximate (edge-terms only).
One uses the average-
least-squares gradients in the direction normal to the edge (edge-normal gradient construction). The
other uses the average-least-squares gradients along the face (face-tangent gradient construction).
The ETO discretizations are obtained from the Avg-LSQ schemes by taking the limit of zero Avg-LSQ
gradients. The ETO schemes are often cited as a thin-layer discretization in the literature [2, 3, 4];
they are positive schemes but are not consistent (i.e., the discrete solutions do not converge to the
exact continuous solution with consistent grid refinement) unless the grid is orthogonal [13, 16]. As
shown in the previous papers [6, 7], ETO schemes lead to deterioration of the multigrid convergence
for refined grids, and therefore are not considered in this paper. For practical applications, the face-
tangent Avg-LSQ scheme was found to be more robust than the edge-normal Avg-LSQ scheme. It
provides superior diagonal dominance in the resulting discretization [6, 7]. In this study, therefore,
we employ the face-tangent Avg-LSQ scheme as a coarse-grid discretization for the model equation
and the viscous term.
For excessively-skewed faces (over 90◦ angle between the outward face normal and the
corresponding outward edge vector), which can arise on agglomerated grids, the gradient is
computed by the Avg-LSQ scheme and edge contributions are ignored. The Galerkin coarse-grid
operator [1], which was considered in a previous study, is not considered here since the method was
found to be grid-dependent and slowed down the multigrid convergence for refined grids [6]. For
inviscid discretization, we employ a first-order edge-based discretization on coarse grids. Table 9.3
shows a summary of discretization used.
9.2.3.1.8 Relaxations
The relaxation scheme is similar to the single-grid iteration described in Sec. 9.2.3.1.4, with the
following important differences. On coarse grids, the Avg-LSQ scheme used for viscous terms has a
larger stencil than the Green-Gauss scheme implemented on the target grid and its exact linearization
has not been used; instead relaxation of the Avg-LSQ scheme relies on an approximate linearization,
which consists of edge terms only. For inviscid cases, the first-order Jacobian is constructed based on
Roe's approximate Riemann solver, and thus it is exact on coarse grids where the first-order scheme
is used for the residual. For RANS cases, the first-order Jacobian is constructed based on Van Leer's
flux-vector splitting, but the inviscid part of the residual is computed by Roe's approximate Riemann
solver. Therefore, the Jacobian is approximate on both the primal and coarse grids. Table 9.5
summarizes the Jacobians used for inviscid and viscous (diffusion) terms on the primal and coarse
grids. In multigrid nonlinear applications, Jacobians are evaluated at the beginning of a cycle and
frozen during the cycle. For inviscid and RANS flow simulations, significantly fewer linear sweeps are
used in a multigrid relaxation than in a single-grid iteration: νp = νl = 5 for both the mean-flow and
turbulence relaxations. For the model diffusion equation, still only one sweep is performed per
relaxation.
9.2.3.1.9 Cost of Multigrid V-Cycle
All of the computations in the paper use FAS multigrid. For the linear model diffusion equation, the
computer time would be reduced if the corresponding correction scheme (CS) cycle were used. To
estimate the relative cost of multigrid cycles in comparison with single-grid iterations, the cost of
nonlinear residual evaluations, relaxation updates, and Jacobian evaluations needs to be taken into
account. Suppose that a nonlinear relaxation and a Jacobian evaluation cost σ and J times a nonlinear
residual evaluation, respectively. Then, the cost of a single-grid iteration relative to the cost of a
nonlinear residual evaluation is given by

W^{SG} = \sigma^{SG} + J
Eq. 9.8
where the superscript SG denotes single-grid iterations. On the other hand, a multigrid cycle involves
ν1 + ν2 nonlinear relaxations, a nonlinear residual evaluation before restriction, and a Jacobian
evaluation per cycle per grid. A residual evaluation on coarse grids is also required to form the FAS
forcing term. The cost of a multigrid cycle, MG, relative to the cost of a fine-grid nonlinear residual
evaluation is given by
W^{MG} = C\left[(\nu_1 + \nu_2)\,\sigma^{MG} + J + 1\right] + C - 1
Eq. 9.9

where C is a coarse-grid factor,

C = 1 + \frac{1}{r_1} + \frac{1}{r_1 r_2} + \frac{1}{r_1 r_2 r_3} + \cdots
Eq. 9.10
Here, r_k is the agglomeration ratio of the k-th agglomerated grid. The relative cost, W^MG_SG, of a V-cycle
is therefore given by

W^{MG}_{SG} = \frac{W^{MG}}{W^{SG}}
Eq. 9.11
Table 9.4 Summary of costs and typical numbers of linear sweeps. The multigrid cycle is a 5-level V(2,1) with a typical coarsening ratio of 8. The numbers in parentheses denote the number of point and line sweeps, respectively, and the second set for RANS denotes the number of point and line sweeps of the turbulence equation.

Table 9.4 shows values of W^MG_SG, σ, and J for each equation set within the single-grid iteration and
the multigrid method. The values for σ and J are based on measured computer times associated with
residual evaluation, Jacobian evaluation, and linear sweeps on the primal grid for particular configurations.
The corresponding values on a per node basis vary from the tabulated values on the coarser grids
and across configurations. Thus, Eq. 9.11 serves as a reasonable approximation to the expected code
performance. Note that the Jacobian computation has been ignored for the model diffusion equation.
This is because the Jacobian is constant for the linear problem and therefore it is computed only once
and never updated. Observe also that σ is much smaller in the multigrid cycle than in the single-grid
iteration for the nonlinear cases. This saving comes from much fewer linear-sweeps in the multigrid
method. We experimentally found that the multigrid convergence did not depend heavily on the
number of linear-sweeps. Increasing them further does not reduce the number of cycles for
convergence, but it merely increases the CPU time. The numbers of linear-sweeps shown for the
single-grid method are typical numbers considered sufficient for robust computations with a
reasonably large CFL number. The relative cost (Eq. 9.11) computed based on the measured σ and J
are shown in the third column of the table. Considering a 5-level V(2,1) cycle with a coarsening
ratio of 8, the relative cost is found to be 4.5 for the diffusion equation, 1.8 for the inviscid equation,
and 1.5 for the RANS equations.
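For a rough feel of how Eqs. 9.8-9.11 combine, the short sketch below evaluates the relative cost for a 5-level V(2,1) cycle with a coarsening ratio of 8; the σ and J values used here are placeholders, not the measured ratios of Table 9.4.

```python
# Relative cost of a V(nu1, nu2) cycle versus a single-grid iteration (Eqs. 9.8-9.11).
def coarse_grid_factor(ratios):
    """Eq. 9.10: C = 1 + 1/r1 + 1/(r1 r2) + ... (fine grid plus each coarser level)."""
    c, prod = 1.0, 1.0
    for r in ratios:
        prod *= r
        c += 1.0 / prod
    return c

def relative_cost(nu1, nu2, sigma_sg, sigma_mg, jac, ratios):
    w_sg = sigma_sg + jac                                        # Eq. 9.8
    c = coarse_grid_factor(ratios)
    w_mg = c * ((nu1 + nu2) * sigma_mg + jac + 1.0) + c - 1.0    # Eq. 9.9
    return w_mg / w_sg                                           # Eq. 9.11

# 5-level V(2,1) hierarchy: four coarse grids, coarsening ratio 8 on every level.
# sigma_sg, sigma_mg and jac below are illustrative placeholders only.
print(relative_cost(nu1=2, nu2=1, sigma_sg=3.0, sigma_mg=1.5, jac=2.0, ratios=[8] * 4))
```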
9.2.3.1.10 Results for Complex Geometries
All calculations presented in this paper were performed with a single processor. Parallelization of
the multigrid algorithm is currently underway. The multigrid cycle is a 5-level V (2 ; 1) for all cases.
9.2.3.1.11 Model Diffusion Equation
The multigrid method was applied on grids generated for two practical geometries: the F6 wing-body
and the DPW-W2 wing-alone cases [14]. Both grids are tetrahedral, but prisms are used in a highly-
stretched viscous layer near the solid boundary. Pyramidal cells are also present around the
transitional region. The multigrid V (2; 1) cycle is applied and compared with single-grid iterations.
Table 9.7 Cost of V-cycle relative to a single-grid iteration and speed-up factor. The expected speed-
up factors have been computed with the actual coarsening ratio
Table 9.6 Summary of grid sizes and parameters for the inviscid cases
The CFL number is set to infinity. For the F6 wing-body grid (1,121,301 nodes), the grids and
convergence results are shown in Figure 9.16. The speed-up factor is 63 in CPU time. A similar result
was obtained for the DPW-W2 grid (1,893,661 nodes) as shown in Figure 9.17. The speed-up factor
is nearly 22 in this case. The cost of one V -cycle computed according to Eq. 9.11 with actual
coarsening ratios is shown for each case in the fourth column of Table 9.7. It shows that one V-cycle
costs nearly 4 single-grid iterations. The fifth column is an expected speed-up factor based on the
number of single-grid iterations (N^SG), the number of multigrid cycles (N^MG), and the factor W^MG_SG:

\frac{N^{SG}}{N^{MG}\, W^{MG}_{SG}}
Eq. 9.12

The last column is the actual speed-up factor computed as a ratio of the total single-grid CPU time to
the total multigrid CPU time. A fairly good agreement can be observed between the expected and
Figure 9.19 Residual versus CPU time for the F6 wing-body case (RANS)
it is currently under investigation. The CFL number is not ramped in this case, but set to 200 for the
mean-flow equations and 30 for the turbulence equation. Convergence results are shown in Figure
9.19. As can be seen, the multigrid achieved four orders of reduction in the residual 5 times faster
in CPU time than the single-grid iteration. For this case, neither the multigrid nor single-grid method
fully converges seemingly due to a separation near the wing-body junction. Four orders of magnitude
reduction is just about how far a single-grid is run in practice for this particular configuration. The
comparison of the number of cycles with the number of single-grid iterations in the figure implies
that the CPU time for a multigrid V (2 ; 1) cycle is less than the CPU time for two single-grid iterations.
As shown in Table 9.7, one multigrid V-cycle actually costs 1.6 single-grid iterations, indicating a
good agreement between the expected and actual speed-up factors.
9.2.3.1.14 Concluding Remarks
An agglomerated multigrid algorithm has been applied to inviscid and viscous flows over complex
geometries. A robust fully-coarsened hierarchical agglomeration scheme was described for highly-
stretched viscous grids, incorporating consistent viscous discretization on coarse grids. Results for
practical simulations show that impressive speed-ups can be achieved for realistic flows over
complex geometries. Parallelization of the developed multigrid algorithm is currently underway to
expand its applicability to larger-scale computations and to demonstrate grid-independent
convergence of the algorithm.
9.2.3.1.15 References
[1] Trottenberg, U., Oosterlee, C. W., and Schuller, A., Multigrid, Academic Press, 2001.
[2] Mavriplis, D. J., "Multigrid Techniques for Unstructured Meshes," VKI Lecture Series VKI-LS 1995-02, Von Karman Institute for Fluid Dynamics, Rhode-Saint-Genese, Belgium, 1995.
[3] Mavriplis, D. J., "Unstructured Grid Techniques," Annual Review of Fluid Mechanics, Vol. 29, 1997, pp. 473-514.
[4] Mavriplis, D. J. and Pirzadeh, S., "Large-Scale Parallel Unstructured Mesh Computations for 3D High-Lift Analysis," Journal of Aircraft, Vol. 36, No. 6, 1999, pp. 987-998.
[5] Mavriplis, D. J., "An Assessment of Linear versus Non-Linear Multigrid Methods for Unstructured Mesh Solvers," Journal of Computational Physics, Vol. 275, 2002, pp. 302-325.
[6] Nishikawa, H., Diskin, B., and Thomas, J. L., "Critical Study of Agglomerated Multigrid Methods for Diffusion," AIAA Journal, Vol. 48, No. 4, 2010, pp. 839-847.
[7] Thomas, J. L., Diskin, B., and Nishikawa, H., "A Critical Study of Agglomerated Multigrid Methods for Diffusion on Highly-Stretched Grids," Computers and Fluids, 2010, to appear.
[8] Spalart, P. R. and Allmaras, S. R., "A One-Equation Turbulence Model for Aerodynamic Flows," AIAA Paper 92-0439, 1992.
[9] Barth, T. J., "Numerical Aspects of Computing Viscous High Reynolds Number Flows on Unstructured Meshes," AIAA Paper 91-0721, 1991.
[10] Haselbacher, A. C., A Grid-Transparent Numerical Method for Compressible Viscous Flow on Mixed Unstructured Meshes, Ph.D. thesis, Loughborough University, 1999.
[11] Anderson, W. K. and Bonhaus, D. L., "An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids," Computers and Fluids, Vol. 23, 1994, pp. 1-21.
[12] Diskin, B., Thomas, J. L., Nielsen, E. J., Nishikawa, H., and White, J. A., "Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretization. Part I: Viscous Fluxes," 47th AIAA Aerospace Sciences Meeting, AIAA Paper 2009-597, January 2009.
[13] Diskin, B. and Thomas, J. L., "Accuracy Analysis for Mixed-Element Finite-Volume Discretization Schemes," NIA Report No. 2007-08, 2007.
[14] "The DPW-II Workshop for the geometry," http://aaac.larc.nasa.gov/tsab/cfdlarc/aiaa-dpw/Workshop2/workshop2.
[15] Briggs, W. L., Henson, V. E., and McCormick, S. F., A Multigrid Tutorial, SIAM, 2nd ed., 2000.
[16] Thomas, J. L., Diskin, B., and Rumsey, C. L., "Towards Verification of Unstructured Grid Methods," AIAA Journal, Vol. 46, No. 12, December 2008, pp. 3070-3079.
9.2.4 Case Study 2 - A 3D Hybrid Grid Generation Technique and a Multigrid/Parallel Algorithm
Based on Anisotropic Agglomeration Approach336
A hybrid grid generation technique and a multigrid/parallel algorithm are presented by [Laiping et
al.]337, for turbulent flow simulations over three-dimensional (3D) complex geometries. The hybrid grid generation
technique is based on an agglomeration method of anisotropic tetrahedrons. Firstly, the complex
computational domain is covered by pure tetrahedral grids, in which anisotropic tetrahedrons are
adopted to discretize the boundary layer and isotropic tetrahedrons are used in the outer field. Then, the
anisotropic tetrahedrons in the boundary layer are agglomerated to generate prismatic grids. The
agglomeration method improves the grid quality in the boundary layer and reduces the number of grid
cells, which enhances numerical accuracy and efficiency. In order to accelerate the convergence
history, a multigrid/parallel algorithm is developed also based on anisotropic agglomeration
approach.
9.2.4.1 Introduction, Background and Contributors
Many grid generation techniques, such as multi-block or patched structured grids338-339, overlapping
or chimera grids340, and unstructured grids341, have been proposed in recent decades. More recently,
mixed or hybrid grids including many different cell types have gained popularity, because they
integrate the advantages of both structured and unstructured meshes to improve efficiency and
accuracy. For example, hybrid (prism/tetrahedral) grids342-343, mixed grids including
(tetrahedral/prism/pyramid/hexahedral) cells344, adaptive cartesian grid methods345-346, and
(cartesian/tetrahedral/prismatic) grids347 have been used in many applications. It is relatively easier
to use unstructured grids over complex configurations, even for viscous flow simulations, where the
anisotropic tetrahedrons are used in the boundary layer. Generally, the anisotropic tetrahedrons can be
automatically generated by an advancing front method348. However, the enormous total grid number
will reduce the efficiency of the viscous flow simulations over complex geometries. More
336 Zhang Laiping, Zhao Zhong, Chang Xinghua, He Xin, “A 3D hybrid grid generation technique and a
multigrid/parallel algorithm based on anisotropic agglomeration approach”, Chinese J of Aeronautics, 2013.
337 See Previous.
338 Jochem H, Peter E, Yang X, Cheng ZM. Parallel multiblock structured grids. Thompson JF, Soni BK, Weatherill
NP, editors. Handbook of grid generation. CRC Press; 1999 chapter 12.
339 Sebastien E. Numerical simulation and drag extraction using patched grid calculations. AIAA Paper 2003.
340 Benek A, Buning PG, Steger JL. A 3-D Chimera grid embedding technique. AIAA Paper 1985-1523; 1985.
341 Weatherill NP. Unstructured grids: procedures and applications. In: Thompson JF, Soni BK, Weatherill NP,
AIAA J, 1996.
343 Pirzadeh S. Three-dimensional unstructured viscous grids by the advancing-layers method. AIAA J 1996.
344 Coirier WJ, Jorgenson PCE. A mixed volume grid approach for the Euler and Navier–Stokes equations. AIAA
1996;34(5):938–45.
346 Wang ZJ, Chen RF. Anisotropic solution-adaptive viscous Cartesian grid method for turbulent flow simulation.
AIAA J 2002;40(10):1969–78.
347 Zhang LP, Yang YJ, Zhang HX. Numerical simulations of 3D inviscid/viscous flow fields on
Cartesian/unstructured/prismatic hybrid grids. Proceedings of the fourth Asian CFD conference, Mianyang,
Sichuan, China; September 2000.
348 Lohner R, Parikh P. Generation of three-dimensional unstructured grids by the advancing front method. Int
importantly, the loss of orthogonality will degrade the simulation accuracy in the boundary
layer. Therefore, the prism grids, even unstructured hexahedral grids, may be a better choice in
the boundary layer. The traditional prism grid generation method is the advancing layer method in
which the prism grids are generated layer-by-layer in the normal direction from the surface
triangular grids on the solid wall. Alternatively, the idea of solving the hyperbolic equations to
generate structured grids has been introduced to generate prism grids349-350. However, for some real-
world configurations, these methods will fail in the concave and/or convex regions, because the
marching vector may be invisible from some of the nodes in its node-manifold351. Examples include
the trailing edge of an airfoil, the tip of a sharp nose, the wing-body junction, the tail of a store
and the nacelles of aircraft. So it is still difficult to automatically generate viscous grids in the
boundary layer. Since the anisotropic tetrahedrons can be generated fully automatically, we can
agglomerate them into prisms in the boundary layer and then improve the grid quality of the pure
anisotropic tetrahedron grids. That is the basic idea of present work.
On the other hand, the computation efficiency is another key issue for turbulence flow simulations
over complex configurations, because the total grid count may reach several tens of millions, or even
hundreds of millions, for a real-life aircraft. The high-aspect-ratio grids in the boundary layer introduce
very strong stiffness during the time-iteration, resulting in lower convergence efficiency. The multigrid
algorithm is an effective method to improve the efficiency. Following [Fedorenko]'s development of the
method in the 1960s352, it was further developed and popularized by [Brandt] in the
1970s353. Multigrid was applied to the transonic small-disturbance equation by [South and
Brandt]354 and to the full potential equation by [Jameson]355. Subsequently, the idea of agglomeration
multigrid has been extended to unstructured grids [Smith]356, [Lallemand et al.]357, [Venkatakrishnan
& Mavriplis]358, and also [Mavriplis]359-360. Despite considerable progress towards improving the
convergence performance of multigrid algorithm based on cell-vertex finite volume schemes, the
performance of these methods for viscous flow simulations is not satisfactory for cell-centered finite-
volume schemes. The key issue is how to generate high-quality coarser grids using the
agglomeration approach. In other words, how to ensure the ‘‘convex’’ property for the coarser
grids, especially in the boundary layer.
The work of Refs. 361-362 provides some inspiration: they agglomerate the grids in the boundary layer with a
349 Chan WM, Steger JL. Enhancements of a three-dimensional hyperbolic grid generation scheme. Appl Math
Computing 1992.
350 Matsuno K. High-order upwind method for hyperbolic grid generation. Computational Fluids 1999.
351 Kannan R, Wang ZJ. Overset adaptive Cartesian/prism grid method for stationary and moving-boundary flow
Platzer MF, editors. Transonic flow problems in turbomachinery. Washington: Hemisphere; 1977.
355 Jameson A. Acceleration of transonic potential flow calculations on arbitrary meshes by the multiple grid
method. Proceedings of the AIAA fourth computational fluid dynamics conference. Virginia, 1979.
356 Smith WA, Multigrid solutions of transonic flow on unstructured grids. In: Baysal O, editor. Recent advances
and applications in computational fluid dynamics. In: Proceedings of the ASME winter annual meeting, 1990.
357 Lallemand M, Steve H, Dervieux A. Unstructured multi gridding by volume agglomeration: current status. Com.
Fluids 1992.
358 Venkatakrishnan V, Mavriplis D. Agglomeration multigrid for the three-dimensional Euler equations. 1995.
359 Mavriplis DJ. Unstructured grid techniques. Annual Rev Fluid Mech 1997;29(1):473–514.
360 Mavriplis DJ. Viscous flow analysis using a parallel unstructured multigrid solver. AIAA J 2000..
361 Daniel G, Sreenivas K. Parallel FAS multigrid for arbitrary Mach number, high Reynolds number unstructured
normal-direction restriction. This idea can be extended to improve the coarser grid quality in
boundary layer. In this paper, a hybrid grid generation technique is presented for turbulence flow
simulations over 3D complex configurations, which is based on an anisotropic agglomeration of pure
tetrahedral grids. Firstly, pure unstructured grids are generated over a given complex geometry, and
anisotropic tetrahedral elements with high aspect ratio are adopted in the boundary layer. Then, the
anisotropic tetrahedrons are agglomerated to generate the prismatic grids in the boundary layer,
while the isotropic tetrahedrons in the outer flow field are left unchanged. To validate the method, the hybrid
grids over some complex geometries are generated, including the DLR-F6 wing-body configuration,
a fighter and a human body, which demonstrate the robustness of the present hybrid grid generation
technique.
Furthermore, a multigrid computing algorithm based on semi-structured agglomeration method is
developed to improve the convergence performance and couple with the parallel computing based
on computational domain decomposition. The semi-structured agglomeration means that the
agglomeration is mainly limited to the normal direction of the solid wall to keep the orthogonality of
hybrid grids in the boundary layer. This multigrid computing algorithm matches the present hybrid
grid generation technique, because both of them are based on the anisotropic agglomeration
approach. Some typical cases are tested to validate the robustness and efficiency of the present
multigrid computing method for viscous flow simulations over complex geometries. The numerical
results are compared with the experimental data and other numerical results, which demonstrate
the efficiency and accuracy of the present method.
9.2.4.2 Hybrid Grid Generation Technique Based on Anisotropic Agglomeration Approach
As mentioned in the introduction, despite considerable progress towards facilitating the grid
generation process itself, the high-quality grid generation over 3D complex real-world
configurations, especially for turbulence flow simulations, is still an open issue for producing
accurate CFD solutions and, thus, requires further attention. Fortunately, the unstructured grid
generation method is currently at a stage of maturity that allows discretization of complex, 3D, real-
world configurations with relative ease and a reasonable amount of time and effort. Generally, the
pure unstructured grids mean triangles in 2D and tetrahedrons in 3D. Thanks to many advances by
a number of researchers in the science/art of grid generation, this crucial step no longer represents
an obstacle to the routine use of CFD in the context of large-scale (industrial) applications. Several
commercial grid generation packages are available, such as Gridgen and ICEM-CFD. There are also
in-house grid generation packages, such as VGrid at NASA and Centaur in Europe. Unstructured grids
can be generated by the advancing front method363-364, the Delaunay
method365 and/or the modified Quadtree/Octree methods366. Actually, in the commercial grid
generation software, the integrated strategy is adopted to improve the grid quality and the grid
generation efficiency. For viscous flow simulations, the anisotropic tetrahedrons are generally
adopted in the boundary layer. However, the enormous total grid number will reduce the efficiency.
More importantly, the loss of orthogonality will degrade the simulation accuracy in the
boundary layer. A better choice may be to generate prisms in the boundary layer, which is the
main advantage of the so-called hybrid grids. However, the prism grid generation is still not a routine
task due to the geometric complexity. Since the pure anisotropic tetrahedrons can be generated
automatically and easily using the available commercial grid generation software, is it possible to
agglomerate them into prisms in the boundary layer and so improve the grid quality?
363 Jochem H, Peter E, Yang X, Cheng ZM. Parallel multiblock structured grids. Thompson JF, Soni BK, Weatherill
NP, editors. Handbook of grid generation. CRC Press; 1999 chapter 12.
364 Sebastien E. Numerical simulation and drag extraction using patched grid calculations. AIAA Paper, 2003.
365 Watson DF. Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes. Comput
J 1981;24(2):167–72.
366 Merry MA, Shephard MS. Automatic three-dimensional mesh generation by the modified-Octree technique. Int
Steps 1-4 of the anisotropic agglomeration procedure are illustrated in Figure 9.20.
The purpose of Steps 2 and 3 is to agglomerate three anisotropic tetrahedrons into a prism. For
real-world configurations, however, some isolated anisotropic tetrahedrons may exist in concave
and/or convex regions. If these cells are allowed to remain, the volume ratio of two neighboring cells
may be 1:3, so the smoothness of the grids in the boundary layer is not satisfactory. Hence, the
agglomeration of Step 4 combines the isolated tetrahedrons into the neighboring prisms to improve
the smoothness of the grids in the boundary layer (the volume ratio of two neighboring cells becomes
about 3:4). In practice, only a small number of isolated anisotropic tetrahedrons are found (see Figure 9.20).
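A minimal sketch of the intent of Step 4 is given below: every cell left isolated by the prism agglomeration is absorbed into an adjacent agglomerate so that the neighbour-to-neighbour volume ratio becomes more moderate. The data layout (volume array, neighbour lists, group ids) is an assumption made for illustration, not the authors' implementation.

```python
def absorb_isolated_cells(volumes, neighbours, group):
    """Step-4 analogue: merge every cell left isolated by the prism agglomeration
    into an adjacent agglomerate, improving the cell-to-cell volume smoothness
    (e.g. a 1:3 neighbour volume ratio becomes roughly 3:4 after the merge).
    volumes[i]: cell volume; neighbours[i]: adjacent cell ids;
    group[i]: agglomerate id, or -1 for an isolated cell (assumed data layout)."""
    for i, g in enumerate(group):
        if g != -1:
            continue                              # already part of a prism agglomerate
        candidates = [j for j in neighbours[i] if group[j] != -1]
        if candidates:
            # absorb into the neighbouring agglomerate with the closest volume
            j = min(candidates, key=lambda k: abs(volumes[k] - volumes[i]))
            group[i] = group[j]
    return group

# usage on a toy configuration: cell 2 is an isolated sliver between two prisms
volumes = [3.0, 3.0, 1.0, 4.0]
neighbours = [[1, 2], [0, 3], [0, 3], [1, 2]]
group = [0, 0, -1, 1]
print(absorb_isolated_cells(volumes, neighbours, group))   # -> [0, 0, 0, 1]
```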
9.2.4.3.2 Interface Agglomeration
After the volume agglomeration, the interface agglomeration is carried out to reduce the number of
interfaces between two neighboring prisms. The two triangles shared by two neighboring prisms are
agglomerated into a quadrilateral (see Figure 9.20-Step 4). The above hybrid grid generation
technique has some distinguishing properties:
➢ Once the pure tetrahedral grids have been generated, the hybrid grids can be generated fully
automatically, without any user intervention. Generally speaking, the tetrahedral grids may
also be generated automatically using advancing front method or Delaunay method.
➢ The grid quality, especially in the boundary layer, is much better than that of pure
unstructured grids, which is very crucial for viscous flow simulations.
➢ The smoothness of grids from the boundary layer to the outer flow field is much better.
Figure 9.21 Initial Hybrid Grids and Coarsened Grids over a Wing – Courtesy of [Laiping et al.]
singular coarser grids will be located out of the cell faces themselves. This kind of ‘singular’ situation
will deteriorate in 3D cases, which will result in failure of multigrid computing, because the
interpolation operator must be applied during the V-cycle iterations. If an anisotropic
agglomeration approach is used, the quality of the coarser grids is greatly improved (see Figure 9.21
(c)), which benefits viscous flow simulation with the multigrid algorithm.
The concept of anisotropic agglomeration was introduced by Mavriplis367 for the cell-vertex finite-volume
method and further developed in Refs. 368-369 for the cell-centered finite-volume method. However, for
arbitrary hybrid grids, some problems remain because the multigrid iteration is sensitive to
the shape of the multi-level coarsened grids. If the quality of the coarser grids is not good enough, the
acceleration provided by the multigrid iteration will be attenuated. In order to improve the quality of
coarser grid by agglomeration, an improved anisotropic agglomeration approach is developed in this
paper. The details are listed as follows:
1 Check the cell property (isotropic or anisotropic cells). The checking criterion is the same as
that of Step 1 in hybrid grid generation above.
2 Agglomerate the surface triangles in a pseudo-2D manner. The surface triangles are
agglomerated with a node-based agglomeration approach. In order to ensure the smoothness
and the quality of coarser surface grids, the following two criteria are considered:
• If two triangles are located on the two separated sides of a sharp edge (For example,
the trailing edge of a wing, the joint-line of fuselage and wing, see [Laiping et al.]370),
the two triangles cannot be agglomerated.
• If there are some isolated non-agglomerated triangles after first-round
agglomeration, they should be agglomerated into the neighboring cells to improve the
smoothness.
3 Agglomerate the anisotropic prism grids layer-by-layer with a method analogous to the 'advancing
layer' method. Advancing in the normal direction from each agglomerated surface coarser grid
(two initial layers are combined into one layer), the prism grids in the boundary layer
are agglomerated layer-by-layer to preserve the semi-structured property of the initial finest
grid.
4 Agglomerate the isotropic grids in the outer flow field. In this step, a node-based
agglomeration approach is used, which means agglomerating all the non-agglomerated cells
connected to a node (a minimal sketch of this grouping is given after this list). However, there
are some special or 'singular' cases during agglomeration, see [Laiping et al.]. For these cases,
the agglomeration is not permitted.
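A minimal sketch of the node-based agglomeration used for the isotropic outer-field cells follows; the cell-to-node data layout is assumed, the seeding order is a simple heuristic, and the 'singular' exceptions mentioned above are not handled.

```python
from collections import defaultdict

def node_based_agglomeration(cell_nodes):
    """Group all not-yet-agglomerated cells attached to a seed node into one
    coarse cell. cell_nodes[c] lists the node ids of fine cell c (assumed layout)."""
    node_cells = defaultdict(list)
    for c, nodes in enumerate(cell_nodes):
        for n in nodes:
            node_cells[n].append(c)

    group = [-1] * len(cell_nodes)
    next_group = 0
    # seed from the nodes touching the most cells first (simple heuristic)
    for n in sorted(node_cells, key=lambda k: len(node_cells[k]), reverse=True):
        free = [c for c in node_cells[n] if group[c] == -1]
        if not free:
            continue                              # everything around this node is taken
        for c in free:                            # one agglomerate per seed node
            group[c] = next_group
        next_group += 1
    return group

# usage: four triangles sharing the central node 4 of a quartered square
cells = [[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]]
print(node_based_agglomeration(cells))            # -> [0, 0, 0, 0]
```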
367 Mavriplis DJ. Viscous flow analysis using a parallel unstructured multigrid solver. AIAA J 2000.
368 Daniel G, Sreenivas K. Parallel FAS multigrid for arbitrary Mach number, high Reynolds number unstructured
flow solver. AIAA Paper 2006-2821; 2006.
369 James L, Nishikawa H. A critical study of agglomerated multigrid methods for diffusion on highly stretched
Figure 9.22 Initial Hybrid Grids and Coarsening Grids over 30P30N Airfoil Wing – Courtesy of [Laiping
et al.]
Spaid FW, Lynch FT. High Reynolds number multi-element airfoil flow field measurements. AIAA 1996.
371
Zhang Laiping, Zhao Zhong, Chang Xinghua, He Xin, “A 3D hybrid grid generation technique and a multigrid/parallel algorithm based on anisotropic agglomeration approach”, Chinese J of Aeronautics, 2013.
span length of the wing in the z-direction). The agreement with the experimental data373 is much better
than that of the inviscid flow simulation (see [Laiping et al.]374).
(a) Initial Hybrid Grid; (b) Second-Level; (c) Close-Up View of the Boundary
Figure 9.24 Initial Hybrid Grids and Coarsening Grids over ONERA M6 Wing – Courtesy of [Laiping et
al.]
373 Schmitt V, Charpin F. Pressure distributions on the ONERA-M6-Wing at transonic Mach numbers, experimental
data base for computer program assessment. Report of the Fluid Dynamics Panel Working Group 04, 1979.
374 Zhang Laiping, Zhao Zhong, Chang Xinghua, He Xin, “A 3D hybrid grid generation technique and a multigrid/parallel algorithm based on anisotropic agglomeration approach”, Chinese J of Aeronautics, 2013.
grid is only half that of the pure unstructured grids. These results demonstrate that the hybrid grid
technique is indeed superior to the pure unstructured grid approach. When M∞ = 0.75 and α = 1.0
degree, the pressure coefficient distributions at three typical sections (see Figure 9.28 (a)) are
shown in Figure 9.28 (b)-(d), where z/b = 15.0%, 33.1% and 63.8%, respectively.
The present results are marked as USTAR. The results by others378-379 are also plotted in the same
figures. It can be seen that the present results are very similar to the best results by USM3D. The flow
separation pattern on the leeward surface of the wing is shown in Figure 9.28 (a), while the
results by UG3380 and the experimental oil-flow pattern are shown in Figure 9.28 (b-d). The size of
the separation zone is still larger than that in the experiment381 but is slightly better than that predicted by UG3.
Figure 9.25 Close-up Views of Hybrid Grids After Agglomeration Wing – Courtesy of [Laiping et al.]
378 Mavriplis DJ. Drag prediction of DLR-F6 using the turbulent Navier-Stokes calculations with multigrid. AIAA
Paper 2004-397; 2004.
379 Lee-Rausch EM, Mavriplis DJ. Transonic drag prediction on a DLR-F6 transport configuration using
Figure 9.26 Aerodynamic Force Coefficients for Different Angles of Attack (M∞ = 0.75) Wing –
Courtesy of [Laiping et al.]
382Zhang Laiping, Zhao Zhong, Chang Xinghua, He Xin, “A 3D hybrid grid generation technique and a
multigrid/parallel algorithm based on anisotropic agglomeration approach”, Chinese J of Aeronautics, 2013.
Figure 9.27 Hybrid Grids over DLR-F6-WBNP Configuration Wing – Courtesy of [Laiping et al.]
Figure 9.28 CP Distributions at Three Typical Sections (M = 0.75, α = 1.0 deg) Wing – Courtesy of
[Laiping et al.]
meshes, a hierarchical element subdivision approach has been adopted388. As described before, one
must ensure that a compatible refinement pattern is obtained on all elements of the mesh if a valid
refined mesh is to be obtained. This technique can be applied to fully tetrahedral meshes, as well as to any
hybrid mesh containing mixtures of tetrahedra, pyramids, prisms and hexahedra. The resulting
meshes can be employed by the multigrid solver described in389 without modification. In order to
implement this technique on mixed element meshes, the various allowable subdivision types for each
element type must be defined. The hierarchical rules required to prevent the degeneration of the grid
quality with successive adaptation levels must also be constructed.
For tetrahedral elements, the subdivision rules have already been well formulated in the literature.
We allow only three basic subdivision types: A tetrahedron may be divided into 2 children, 4 children,
or 8 children. The two former cases result in anisotropic refinement, while the last case produces an
isotropic refinement. In order to prevent the degeneration of grid quality, anisotropic children
may not be refined further. If any such cells require refinement, they are removed, the parent cell is
isotropically refined, and the resulting isotropic children may then be further refined. When limiting
the possible refinement types in this way, a compatible refinement pattern must still be obtained on
every element. This is achieved by adding refinement points along all appropriate edges of all
elements which are flagged as having a non-valid refinement pattern. Since the addition of a
refinement point to an edge affects all elements which contain the edge, the process is applied
iteratively, until all resulting elemental refinements are valid and no further points are required.
The isotropic refinement of a hexahedral element results in eight similar but smaller hexahedral
elements. However, anisotropic refinement of a hexahedral element results in children which may
consist of hexahedra, pyramids, prisms and tetrahedra. By applying the same hierarchical rules as
described for tetrahedral meshes, we can ensure that these elements will never be refined further.
Instead, if further refinement in these regions is desired, such elements are deleted and their parents
refined into eight smaller hexahedra. Thus, for fully hexahedral meshes, additional element types may
only appear at the boundaries between refined and non-refined regions, or more generally, between two
regions which differ by one refinement level. The task of implementing adaptive mesh subdivision for
elements other than tetrahedra consists in defining the minimum number of allowable subdivision types.
On the one hand, it is desirable to limit the number of subdivision types for complexity reasons. On the
other hand, a minimum number of subdivision types must be implemented to allow compatible
subdivision types to be attained on all elements without incurring excessive additional
refinement390.
388 Mavriplis D.J., "Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes", NASA Contract No. NAS1-19480, May 1997.
389 D. J. Mavriplis and V. Venkatakrishnan, "A unified multigrid solver for the Navier-Stokes equations on mixed geometries," 9th AIAA Computational Fluid Dynamics Conference, Buffalo, NY, June 1989, AIAA-89-1930-CP.
392Hiroshi Abe, “Blocked Adaptive Cartesian Grid FD-TD Method for Electromagnetic Field with Complex
Geometries”, International Conference on Modeling and Simulation Technology, Tokyo, JAPAN, 2011.
0’ (Figure 10.3)393.
393 Each black circles indicates leaf nodes in the tree structure and they correspond to the cells as is shown with
the numbers.
\cos(\theta_{ij}) = \frac{\vec{V}_i \cdot \vec{V}_j}{|\vec{V}_i|\,|\vec{V}_j|}
Eq. 10.2
394 M.J. Aftosmis. Solution adaptive Cartesian grid methods for aerodynamic flows with complex geometries, 1997.
395 See Previous.
If θ in a cell exceeds a predefined angle threshold, then the cell is tagged for division. This procedure
for division is very simple and robust, and adaptive Cartesian cells are obtained automatically. A
further example is provided by a 2D backward-facing step (see Figure 10.5).
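A minimal sketch of such an angle-based tagging pass is shown below; it compares a per-cell velocity vector against that of each face neighbour and is an illustrative interpretation, with the threshold and data layout assumed.

```python
import numpy as np

def tag_cells_for_division(cell_velocity, neighbours, angle_threshold_deg=15.0):
    """Tag a cell when the velocity direction changes by more than the threshold
    angle across any of its neighbours (illustrative angle criterion)."""
    cos_tol = np.cos(np.radians(angle_threshold_deg))
    tagged = []
    for i, vi in enumerate(cell_velocity):
        for j in neighbours[i]:
            vj = cell_velocity[j]
            cos_theta = np.dot(vi, vj) / (np.linalg.norm(vi) * np.linalg.norm(vj))
            if cos_theta < cos_tol:               # angle exceeds the threshold
                tagged.append(i)
                break
    return tagged

# usage: three cells in a row; the flow turns sharply between cells 1 and 2
v = np.array([[1.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
print(tag_cells_for_division(v, [[1], [0, 2], [1]]))   # -> [1, 2]
```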
10.2.3 Uniform AMR
The simplest refinement anyone can think of is to divide all cells in the domain. This is referred to as
"Uniform Refinement". Although it does improve the solution vastly, it is easy to realize that we are
going for a huge unwanted effort in doing so. For e.g., in the far field region of an airfoil, cell division
is not bringing in any improvement because the flow such as a shock-boundary layer interaction. To
achieve the goal of mesh adaptation, the refinement is done at "selected" regions alone based on
certain criterion. This is referred to popularly as AMR or Adaptive Mesh Refinement. It is to be
remarked that AMR does not only encompass division of cells into smaller ones (Refinement), but
also the agglomeration of smaller cells into a larger one (De-Refinement or coarsening), when the
need arises.
10.2.3.1 Transient Inviscid Flow
Consider the solution of an internal transient inviscid supersonic flow (Lyra et al.)396. The geometry
consists of a wind tunnel with a step, and the inflow boundary condition is a uniform Mach 3.0 flow at
0° angle of attack. At the right boundary the flow is left free to leave the domain, and along the walls
reflecting boundary conditions are applied. During the transient adaptive procedure, several adapted
meshes are generated along the time integration according to the error analysis. Figure 10.6 shows
some selected meshes: mesh 3 is the third mesh generated during the transient adaptive process, and
two further meshes are generated before and after the time when the shock starts to be reflected from
the top boundary. The mesh refinement clearly follows the physical features of the flow. The adaptive
algorithm tries to obtain an "optimal" mesh for a pre-defined number of elements. The target number
of elements for this analysis was 1000, and a limited aspect ratio of 4 was considered. The number of
elements generated in the meshes shown was 620, 977, 994 and 1010, and the corresponding number
of nodes was 638, 1007, 1029 and 1047, showing that the procedure obeyed the imposed constraints well.

Figure 10.6 Selected Initial Meshes for the Transient Adaptive Procedure (Meshes 3, 20, 27 and 29)
396Lyra, P.R.M., de Carvalho, D.K.E., Willmersdorf, R.B. and Almeida, R.C.C., 2002, “Transient Adaptive Finite
Element Analysis of Compressible Flows”, WCCM'2002-Proceedings of the 5th World Conference on
Computational Mechanics, Vienna-Austria.
10.2.4 Case Study 1 - An Adaptive Hybrid Mesh Generation Method for Complex Geometries
An adaptive hybrid mesh generation method is described by [Cameron Thomas Druyor JR.]397 to
automatically provide spatial discretization suitable for 2D solver applications. This method employs
a hierarchical grid generation technique to create a background mesh, an extrusion-type method
for inserting boundary layers, and an unstructured triangulation to stitch between the boundary
layers and background mesh. This method provides appropriate mesh resolution based on geometry
segments from a file, and has the capability of adapting the background mesh based on a spacing field
generated from solution data or some other arbitrary source. By combining multiple approaches to
the grid generation process, this method seeks to benefit from the strengths of each, while avoiding
the weaknesses of each.
10.2.4.1 Mesh Stitching
The hybrid method, like AMR, generates several different meshes which must be assembled into
a single mesh before export to a mesh file that can actually be used by a solver. There are two main
elements to accomplishing this task: removing elements from the background mesh that are not
to be part of the final mesh, and bridging the gap between the viscous and inviscid meshes.
10.2.4.1.1 Removal of Background Mesh Elements
Removal of the unnecessary elements of the inviscid mesh is done in two stages. First, all elements
that violate a boundary segment, or come close to violating a segment within a tolerance, are removed
from the mesh. This partitions the mesh into contiguous blocks that are wholly inside, or outside the
computational domain. The remaining voxels are marked in or out with a recursive flood-fill
algorithm. It is important to note that flooding the contiguous blocks recursively can result in a stack
overflow for large meshes because of the number of function calls that get pushed onto the call stack.
There are two options for avoiding this situation: increase the maximum stack size, or develop a
replacement routine that applies the recursive algorithm without making recursive function calls.
The proposed method takes the second approach, utilizing a queue-style structure. Starting from an
unmarked cell, the algorithm checks whether its neighbors have been marked. Each neighbor that is not yet marked is marked
and then pushed onto the queue. Then, while the queue is not empty, the first element of the queue
is popped off. This element checks for neighbors that have not yet been marked, marks them, and
pushes them onto the queue. When there are no voxels (i.e., cells) left in the queue, the algorithm
searches for another unmarked voxel to start the process again, until there are no unmarked voxels
left.
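The queue-based flood fill described above can be sketched in a few lines; the voxel adjacency list is an assumed data structure.

```python
from collections import deque

def flood_fill_regions(neighbours):
    """Mark contiguous blocks of voxels with a region id using an explicit queue,
    avoiding the stack-overflow risk of a recursive flood fill.
    neighbours[i]: ids of voxels adjacent to voxel i (assumed adjacency list)."""
    region = [-1] * len(neighbours)
    current = 0
    for seed in range(len(neighbours)):
        if region[seed] != -1:
            continue                       # already reached from an earlier seed
        region[seed] = current
        queue = deque([seed])
        while queue:
            v = queue.popleft()            # pop the first voxel off the queue
            for n in neighbours[v]:
                if region[n] == -1:        # mark unvisited neighbours and enqueue them
                    region[n] = current
                    queue.append(n)
        current += 1                       # next region starts from the next unmarked voxel
    return region

# usage: voxels 0-2 form one block, voxels 3-4 a second (disconnected) block
adjacency = [[1], [0, 2], [1], [4], [3]]
print(flood_fill_regions(adjacency))       # -> [0, 0, 0, 1, 1]
```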
10.2.4.2 Triangulation
Once the voxel removal is complete, there is a gap between the viscous region and the background
mesh (in the absence of a viscous mesh, there is a gap between the geometry and the background
mesh). This region must be filled with cells before a valid mesh can be created. This is done in two
steps. First, a list of unique nodes and boundaries for the region to be triangulated is created. The
boundaries consist of the exposed edges of the voxel front, the outermost edges of the viscous mesh,
and any exposed geometry segments, and the unique node list contains each point that is part of any
of those edges. Once these are packaged up properly, they are passed to a Delaunay triangulation
method written by Dr. Steve Karman, which returns a list of triangles that fill the gap region.
10.2.4.3 Test Cases
10.2.4.3.1 30P30N Multi-Element Airfoil
The proposed method has a robust viscous layer production method that can create viscous layers
without introducing negative elements caused by crossed normals. To showcase this, the 30P30N
397Cameron Thomas Druyor JR, “An Adaptive Hybrid Mesh Generation Method for Complex Geometries”, A
Thesis Submitted to the faculty of the University of Tennessee, Chattanooga in Partial Fulfillment of the
Requirements for the Degree of Master of Science in Computational Engineering, 2011.
airfoil was chosen. The far-field view of the mesh demonstrates that it spans the
computational domain and that the square far field boundaries are preserved. Zooming in on the slat
element, Figure 10.7 shows that viscous layers are created on the slat and the leading edge of the
wing. Taking a closer look at the bottom trailing edge of the slat shows that the expected issues are
present at the sharp point, but the mesh is valid in the region. Note that the elements in the
stitching region are generally of high quality and appropriate size; it is only in the areas where the
viscous elements are skewed that the cell size gradation changes drastically between the viscous
elements and the stitching elements. Further discussion is available in [Druyor JR.]398.
398 Cameron Thomas Druyor JR, “An Adaptive Hybrid Mesh Generation Method for Complex Geometries”, A
Thesis Submitted to the faculty of the University of Tennessee, Chattanooga in Partial Fulfillment of the
Requirements for the Degree of Master of Science in Computational Engineering, 2011.
399 David A. Venditti and David L. Darmofal, “Grid Adaptation for Functional Outputs: Application to Two
criteria, with the regions of mesh refinement corresponding to regions of significant flow activity
where it is desirable to have increased spatial resolution (see Figure 10.10 and Figure 10.9). How
these criteria are chosen has important consequences for the overall operation of the adaptive solver.
The complete adaptive solver may be thought of as consisting of three parts,
Figure 10.9 Grid Adaption using Supersonic Flow for an Airfoil (bow shock)
Figure 10.10 NACA 0012 Transonic test case: M∞ = 0.8, α = 1.25
Thus the adaptation and integration operations can be thought of as two distinct processes that are
applied to the central data structure. The connectivity outlined above is sufficient to completely
specify a given mesh, but it does not contain the connectivity required to construct the adaption
hierarchy. For this, some additional information is required, which in the case of elements and edges
consists of storing parent and child addresses. The bisection of a parent edge by the addition of a
node to the mesh results in the creation of two child edges. The data structure is also extended to
include numerical parameters such as the flow variables. In a complete adaption and integration of
the mesh, mesh elements are first flagged for adaption; this results in each mesh edge being targeted
either for refinement, de-refinement, or no action.
10.2.5.1 Adaption Control Mechanism
The Euler flow solver is combined with the adaptive algorithm by flagging regions of the mesh with
low (high) density gradients for de-refinement (refinement), with the calculation of local flow gradients
being performed across element faces. Where the face-normal density gradient falls below or exceeds a
chosen tolerance, the edges on the face are flagged for de-refinement or refinement, respectively. In addition, a `safety layer' of
refinement flagging is employed to ensure the full capture of solution discontinuities, which is the
principal concern for this application. Likewise, a maximum mesh refinement depth is also specified.
Coupling the adaption algorithm to the solution integrator is managed in a similarly straightforward
manner. Figure 10.10 depicts a NACA airfoil in a transonic flow, while Figure 10.9 shows the same
geometry when placed in supersonic flow with a bow shock.
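A minimal sketch of this gradient-based flagging is given below, assuming cell-averaged densities, per-face cell-centre distances, and two user tolerances; it is illustrative rather than the solver's actual implementation.

```python
def flag_edges_by_density_gradient(faces, density, distance,
                                   refine_tol=0.1, coarsen_tol=0.01):
    """faces: list of (cell_left, cell_right, edge_ids) tuples; density: cell values;
    distance[f]: normal distance between the two cell centres of face f.
    Returns sets of edges flagged for refinement and for de-refinement."""
    refine, coarsen = set(), set()
    for f, (cl, cr, edges) in enumerate(faces):
        grad = abs(density[cr] - density[cl]) / distance[f]   # face-normal gradient
        if grad > refine_tol:
            refine.update(edges)
        elif grad < coarsen_tol:
            coarsen.update(edges)
    return refine, coarsen - refine          # refinement takes precedence

# usage: face 0 sits across a shock, face 1 lies in smooth flow
faces = [(0, 1, [10, 11]), (1, 2, [11, 12])]
density = [1.0, 1.8, 1.801]
distance = [0.5, 0.5]
print(flag_edges_by_density_gradient(faces, density, distance))
```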
10.2.6 Case Study 3 – Parallel Implementation of Unstructured Mesh Refinement of Duct Flow
The simultaneous alteration of the decomposed domains of an unstructured mesh presents a number
of challenges400. In parallel, each processor operates on its own partition, concurrent with and
independent of the others. Previous work in parallel mesh refinement401-402 demonstrated methods
in which adaptation was performed on each processor, and patterns for cell subdivision were
exchanged across inter-processor boundaries, ensuring a conforming mesh. Coarsening the inter-
processor boundary was not a concern, nor was the possible motion of the mesh boundaries.
Therefore the first issue that arises in parallel adaptation is how to treat the inter-processor
boundaries. Rather than modify these faces, the inter-processor boundaries are shifted using a cell
migration technique. The inter-processor faces and adjacent cells then become interior faces and
interior cells, which may be readily modified through a second adaptation pass. In the second pass
only the former inter-processor boundary region needs to be coarsened, refined, or smoothed, as the
remainder of the mesh is already consistent with the prescribed point spacing. Figure 10.11 illustrates
the two-pass approach for solution-based coarsening and refinement of supersonic flow entering a duct. The
original mesh partitions, shown in Figure 10.11-(1), are independently coarsened and refined to
produce the adapted mesh of (2). Note that the inter-processor boundaries are not modified, which
leaves a region of the mesh that still requires adaptation. Several layers of cells are migrated from
the right processor to the left, as seen in (3). The inter-processor boundary is now to the right of its
original location. A second coarsening and refinement pass treats the former inter-processor faces
and adjacent cells, producing the final adapted grid of (4).
A consequence of the cell migration approach is that the shape and extent of the decomposed
domains change. The cell migration process may introduce new pairs of adjacent domains that did
not initially communicate. Similarly, pairs of processors that once shared common nodes, edges, and
faces may become disconnected as a result of cell migration. Updating the inter-processor
communication schedule proceeds in two stages. First, the current communication lists are updated
400 Cavallo, P.A., Sinha, N., and Feldman, G.M.,” Parallel Unstructured Mesh Adaptation For Transient Moving
Body And Aeropropulsive Applications”, Combustion Research and Flow Technology, Inc. (CRAFT Tech), PA.
401 De Keyser, J., and Roose, D., “Run-Time Load Balancing Techniques for a Parallel Unstructured Multi-Grid Euler
Solver with Adaptive Grid Refinement”, Parallel Computing, Vol. 21, pp. 179-198, 1995.
402 Flaherty, J.E., Loy, R.M., Shephard, M.S., Szymanski, B.K., Teresco, J.D., and Ziantz, L.H., “Adaptive Local
Refinement with Octree Load Balancing for the Parallel Solution of Three-Dimensional Conservation Laws”,
Journal of Parallel and Distributed Computing, Vol. 47, pp. 139-152, 1997.
for each pair of adjacent domains. If no common nodes are found between two domains, the
communication is removed from the cycle. The second stage involves checking for any new
communication pairs introduced as a result of migration. In addition, one can no longer refer to the
original decomposed grid to obtain data for solution transfer or for establishing point spacing after
the coarsening phase. This issue is remedied by recomposing the global grid arrays at the start of the
mesh adaptation process, such that the list of global vertex coordinates, solution vectors, and
computed point spacing may be readily available to all processors.
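The two-stage update of the communication schedule can be sketched as set operations on the node sets owned by each partition after migration; the partition-to-node map and the pair representation are assumed data structures.

```python
def update_communication_schedule(partition_nodes, old_schedule):
    """partition_nodes[p]: set of global node ids on partition p after cell migration.
    Stage 1 drops pairs that no longer share nodes; stage 2 adds new sharing pairs."""
    # stage 1: drop pairs whose partitions no longer share any node
    schedule = {(a, b) for (a, b) in old_schedule
                if partition_nodes[a] & partition_nodes[b]}
    # stage 2: add pairs newly brought into contact by cell migration
    n = len(partition_nodes)
    for a in range(n):
        for b in range(a + 1, n):
            if (a, b) not in schedule and partition_nodes[a] & partition_nodes[b]:
                schedule.add((a, b))
    return schedule

# usage: partitions 1 and 2 separate, while partitions 0 and 2 newly touch after migration
parts = [{1, 2, 3}, {3, 4, 5}, {2, 6, 7}]
print(sorted(update_communication_schedule(parts, {(0, 1), (1, 2)})))   # -> [(0, 1), (0, 2)]
```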
10.2.7 Case Study 4 – Generic Transonic Store Release403
The next application considered is the separation of a finned store from a wing/pylon configuration
at Mach 1.2. Inviscid flow is assumed for this tetrahedral grid. A constant ejection force is applied
over the first 0.1 seconds of the simulated
(3) Cell migration into inter-processor boundary; (4) Second adaptation pass
Figure 10.12 Store position, orientation, and surface pressures at selected points in trajectory
403 Cavallo,
P.A., Sinha, N., and Feldman, G.M.,”Parallel Unstructured Mesh Adaptation For Transient Moving Body
And Aeropropulsive Applications”, Combustion Research and Flow Technology, Inc. (CRAFT Tech), PA 18947.
separation. After the initial ejection stroke, the motion of the store is provided by general 6-degree-
of-freedom (6-DOF) equations of motion using the current integrated surface pressure distribution.
Gravitational acceleration is also included. Figure 10.12 provides an overview of the simulation. A
total of ten adaptations were performed at regular intervals. The unstructured grid is comprised of
approximately 2.7 M cells, and is decomposed on 16 processors. In this image, the store is colored by
the current pressure distribution at each of the four instants shown, and the black lines indicate the
changing inter-processor boundaries on the store surface resulting from cell migration and load
rebalancing. As it translates, the store yaws nose away from the symmetry plane and pitches nose
down. The surface pressure distribution reflects the changing local angle of attack and sideslip angle
of the store.
As the distance between the store and pylon surfaces increases, the mesh distortion becomes less
severe. With each successive adaptation, the deformation measure reduces to a minimum value
greater than the previous minimum. This indicates that mesh movement may likely be applied for a
longer period of time before adaptation is warranted. Such strategies and tradeoffs are yet to be
investigated. The evolution of the unstructured mesh as the store falls away is depicted in Figure
10.13 where inter-processor boundaries are highlighted in red. Through the adaptive coarsening
and refinement procedures, overall mesh quality is maintained, and an appropriate cell distribution
is provided as the distance between the store and pylon increases. Although Figure 10.13 illustrates
a slice through the mesh, one can readily see the migration and rebalancing of the inter-processor
boundaries404, shown on the left (before) and right (after) redistribution. To improve the partitioning
through high-aspect-ratio cells, FLUENT® recently devised a partitioning method based on grouping of
the Laplace coefficients, as shown in Figure 10.14. In addition, it improves the convergence rate for
cases with highly stretched cells.
10.2.8 Case Study 5 - Adaptive Hybrid Mesh Refinement for Multiphysics Applications405
We have developed methods for optimizing meshes that are comprised of elements of arbitrary
polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive
meshing technology tailored to application areas relevant to multi-physics modeling and simulation.
Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine
the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either
404 Cavallo,P.A., Sinha, N., and Feldman, G.M., ”Parallel Unstructured Mesh Adaptation For Transient Moving Body
And Aeropropulsive Applications”, Combustion Research and Flow Technology, Inc. (CRAFT Tech), Pipersville, PA.
405 Ahmed Khamayseh and Valmor de Almeida, "Adaptive Hybrid Mesh Refinement for Multiphysics Applications", Journal of Physics: Conference Series 78 (2007) 012039.
the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to equidistribute a weighted geometric and/or solution error function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate modeling. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably time-dependent, multi-physics, multi-scale modeling and simulation.
10.2.8.1 Adaptive Hybrid Mesh Optimization
The principal objective of this paper is to present an overview of current meshing efforts and development at Oak Ridge National Laboratory. Our capability is geared toward generating high-quality adaptive meshes for petascale applications. In this work, we have researched and developed tools and algorithms for the generation and optimization of adaptive hybrid meshes using a finite-volume discretization approach. The hybrid mesh approach attempts to combine the advantages of both structured and unstructured meshing strategies. Prismatic and hexahedral elements are used in regions of high solution gradients, and tetrahedra are used elsewhere, with pyramids used at the boundary between these two element categories to provide a transition region. In addition, polyhedral/icosahedral meshes are often the best choice for solving symmetric computational problems (e.g., inertial confinement fusion and climate modeling). They have the property of producing symmetric, higher-order orthogonal meshes and do not introduce artificial geometric interfaces. Furthermore, hexahedral/prismatic layers close to wall surfaces exhibit the good orthogonality and clustering capabilities characteristic of structured mesh generation approaches. The mesh example in Figure 10.15 showcases the generation of hybrid surface and volume meshes on symmetric multi-region geometries406. The geometry of the mesh and its symmetries are matched to the analytical and numerical methods used to solve the governing equations.
Figure 10.15 Hybrid Icosahedral Surface Mesh (left) and Multi-Material Hybrid Volume Mesh (right) – (Courtesy of Khamayseh and Almeida)
Our approach to the meshing problem is to utilize tools and technologies developed by the center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) in our integrated geometry, meshing and adaptivity server (GMAS). The ITAPS center is one of the Centers for Enabling Technologies (CET) in the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program. The center's focus is on developing advanced, scalable, interoperable software associated with geometry, mesh, and field manipulation. It also provides the necessary meshing tools to reach new levels of understanding through the use of high-fidelity calculations based on multiple coupled physical processes and multiple interacting physical scales. GMAS is a code intended to integrate scientific software and provide geometry, meshing and adaptivity services for PDE solvers of coupled multiphysics applications without exposing details of the underlying libraries. GMAS is currently used to handle multiple meshes for multiple PDE solvers
Figure 10.16 HTTR Multi-Material Geometry, Initial Coarse Mesh (left), Refined Mesh (right) – (Courtesy of Khamayseh and Almeida)
for a given geometry in a coupled application, and it provides the basic infrastructure to allow the
application to evaluate fields over multiple meshes. It has been used in Multiphysics applications to
provide meshing services for a neutron transport simulation code and a solvent extraction fluid flow
code in development. The following example (Figure 10.16) exhibits coarse and refined anisotropic meshes generated using GMAS. The fine mesh is used to capture boundary-layer flow and heat flux, and the coarse mesh is needed for neutronics in the coolant channels of the high-temperature test reactor (HTTR).
10.2.8.2 Hybrid Adaptive Meshing
Our ongoing meshing research and development concentrates on hybrid mesh adaptation strategies, along with mesh optimization. In certain multi-physics applications, the size of the mesh at a given location should be selected to resolve the smallest physical length scale at that point. Too few mesh elements result in a locally incorrect solution, whereas too many mesh cells slow the calculation needlessly. The quality of the solution also depends on other mesh characteristics, such as element shapes and connectivity, smoothness and "impedance" requirements, element orthogonality, anisotropic elements to match anisotropic physics, and boundary representation requirements. We have developed a hybrid finite volume-based mesh generator for r-adaptivity with particular emphasis on climate modeling. We employ conformal mapping to derive the elliptic PDE models for the optimization and adaptation of hybrid surface meshes. However, an algebraic method is used in the case of combined h-p adaptivity, wherein the degree p of the polynomial basis functions can be adapted to the features of the field quantities. The examples in Figure 10.17 exhibit the generation of r-h adapted hybrid surface meshes for climate modeling. It has been shown that mesh adaptation can reduce simulation error in predicting the dynamics of the climate system.
Figure 10.17 Orography Field (left), r-Adaptivity (center) and h-Adaptivity (right) for Climate Modeling – (Courtesy of Khamayseh and Almeida)
Applications of this capability also include field transformation and mapping across multiple meshes, in particular the generation of smooth adaptive grid transformations for resolving orography (earth surface height) and fine-scale processes in climate modeling. Orography plays an important role in determining the strength and location of the atmospheric jet streams. Its impact is most pronounced in numerical simulation codes for detailed regional climate studies. In addition, orography is a crucial parameter for prediction of many key climatic dynamics, elements, and moisture physics, such as rainfall, snowfall, and cloud cover. The phenomenon of climate variability is sensitive to orographic effects and can be resolved by the generation of finer meshes in regions of high altitude. Resolving orography produces a more accurate prediction of wetter or drier seasons in a particular region. Moreover, orography defines the lower boundary in general circulation
models. The following example (Figure 10.18) shows a very dense orography field on a planar uniform mesh with two-kilometer gridded resolution (Figure 10.18, top). The initial field data size was two gigabits and it was obtained from http://www.ngdc.noaa.gov/mgg/topo/globe.html. We have successfully introduced h-p adaptivity to a least-squares method with spherical harmonics basis functions for field mapping and mesh adaptation. The resulting mesh (Figure 10.18, bottom) is much coarser at sea level (50 kilometers) and finer in the high-altitude regions (1 kilometer), with only a fraction of the original field data size. Moreover, the orography field was globally preserved to within a very small error. For a fully detailed presentation of this adaptive meshing approach we refer the reader to 408-409.
Figure 10.18 Coupled Orography Field Transfer with h-Adaptivity: Planar Orography Field (top), h-Adapted Surface Mesh (bottom-left) and Close-up View of the Mesh (bottom-right)
408 Khamayseh A., de Almeida V.F. and Hansen G., "Hybrid Surface Mesh Adaptivity for Shallow Atmosphere Simulation", ORNL/TM-2006-28.
409 de Almeida V. F., Khamayseh A. K. and Drake J. B., "An h-p Adaptive Least-Squares Cartesian Method with Spherical Harmonics Basis Functions for the Shallow Atmosphere Equations", ORNL/TM-2006-26.
geometries and adapt (h-adaptively) to areas of steep solution gradient, notably at the walls of the
vortex generator. Such very large meshes can only be created with the help of parallel computing.
Our approach is to generate an initial mesh that resolves the surface geometry at any practical
tolerance in a single processor while keeping the interior volume mesh coarse. In practice, such
meshes can only be of unstructured type and there is a risk that many elements will become singular
or of unacceptable aspect ratio. Next, we leverage tools from the ITAPS center into GMAS to partition
the mesh, distribute the data, refine/improve the quality of the distributed mesh in parallel, and
balance the load of the new mesh. Existing functionality of GMAS already provides the partitioning and distribution of the data (Figure 10.19); we are currently concentrating on developing the parallel mesh refinement, data redistribution and load balancing.
Figure 10.19 Meshing and Partitioning of Centrifugal Contactor – (Courtesy of Khamayseh and Almeida)
10.2.8.4 Conclusions
The paper presents an overview of the current tools being developed under ITAPS and GMAS for the generation, optimization, and adaptation of hybrid meshes. In addition, the research in this paper involves the development of r-h-p adaptive technology tailored to application areas relevant to other simulation fields. The r-h-p adaptive meshing approach and its underlying methods can be attractive to many application areas when solving three-dimensional, multi-physics, multi-scale, and time-dependent PDEs. The method builds on r-h refinement/coarsening, p-refinement, interpolation, and error estimation applied to climate modeling and astrophysics simulation. For additional information, please refer to [Khamayseh and Almeida]410.
410 Ahmed Khamayseh and Valmor de Almeida, “Adaptive Hybrid Mesh Refinement for Multi-physics
Applications”, Journal of Physics: Conference Series 78 (2007) 012039.
411 Christopher J. Roy, “Strategies for Driving Mesh Adaptation in CFD (Invited)”, AIAA 2009-1302.
solution to the discrete equations and the exact solution to the governing partial differential
equations. Discretization error is the most difficult type of numerical error to estimate and is usually
the largest of the numerical error sources, which also include iterative error, round-off error, and
statistical error (where relevant). There are a number of different approaches for estimating
discretization error, but they all rely on the underlying numerical solution (or solutions) being in the
asymptotic range with regards to either the truncation error or the discretization error. In addition
to the importance of estimating the discretization error, we also desire methods for reducing it.
Applying uniform mesh refinement (required for extrapolation-based discretization error estimation
such as Richardson extrapolation) is not the most efficient method for reducing the discretization
error. Since uniform refinement, by definition, uniformly refines over the entire domain, it generally
results in meshes with highly refined cells/elements in regions where they are not needed. For 3D
CFD applications, each time the mesh is refined by grid doubling in each coordinate direction, the
number of cells/elements increases by a factor of eight. Thus uniform refinement for reducing
discretization error can be extremely expensive. Targeted local refinement, or mesh adaptation, is a
much better strategy for reducing the discretization error. There have been several extensive reviews
of mesh adaption approaches for CFD (e.g., see [Baker]412 and [McRae]413); however much of this
work has focused on methods for actually performing the adaption rather than the approach for
driving the mesh adaptation. This paper examines several different criteria for driving a mesh
adaptation scheme.
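To make the extrapolation-based estimate concrete, the following is a minimal sketch (not taken from the cited references) of a Richardson-extrapolation estimate of the discretization error in a solution functional from two systematically refined grids; the function name and the sample drag-coefficient values are hypothetical.

```python
import numpy as np

def richardson_error_estimate(f_coarse, f_fine, r=2.0, p=2.0):
    """Estimate the discretization error of the fine-grid value f_fine.

    f_coarse : functional value (e.g. drag coefficient) on the coarse grid
    f_fine   : same functional on a grid refined by factor r in each direction
    r        : grid refinement factor (r = 2 corresponds to grid doubling)
    p        : formal order of accuracy of the scheme
    """
    # Richardson extrapolation: f_exact ~= f_fine + (f_fine - f_coarse)/(r**p - 1)
    f_exact_est = f_fine + (f_fine - f_coarse) / (r**p - 1.0)
    # Signed discretization-error estimate for the fine-grid solution
    return f_fine - f_exact_est

# Hypothetical drag-coefficient values from two grid levels (illustration only)
cd_coarse, cd_fine = 0.02950, 0.02885
err = richardson_error_estimate(cd_coarse, cd_fine, r=2.0, p=2.0)
print(f"estimated discretization error on fine grid: {err:.2e}")
```

With a third grid level, the observed (rather than formal) order of accuracy can be computed and used in place of p, which is the usual safeguard when the solutions may not be in the asymptotic range.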
10.3.1.1 Feature-Based Adaption
The most widely-used approach to grid adaptation employs feature-based methods. These methods
often use solution gradients, solution curvature, or even identified solution features to drive the
adaptation process. Feature based adaptation often results in some feature being over-refined while
other features are not refined enough. In some cases, gradient-based refinement can actually increase
the solution error. An example of the failure of feature-based adaptation is given by [Dwight]414 for
the inviscid transonic flow over an airfoil using an unstructured finite-volume discretization. Figure
10.20 (a) shows the discretization error in the drag coefficient as a function of the number of cells.
Uniform (global) adaptation shows second order convergence on the coarser grids, then a reduction
to first order on the finer grids, likely due to the presence of the shock discontinuities. Adaptation
based on solution gradients initially shows a reduction in the discretization error for the drag
coefficient, but then subsequent adaptation steps show an increase in the discretization error. The
adjoint-based artificial dissipation estimator gives the best results. The adapted grids for the latter
two cases are given in Figure 10.20 (b-c). While the gradient-based adaptation refines the shock
waves on the upper and lower surface as well as the wake, the adjoint-based adaptation also refines
near the surface and in the region above the airfoil containing acoustic waves that impinge on the
trailing edge.
412 Baker, T. J., “Mesh Adaptation Strategies for Problems in Fluid Dynamics,” Finite Elements in Analysis and
Design, Vol. 25, 1997, pp. 243-273.
413 McRae, D. S., "r-Refinement Grid Adaptation Algorithms and Issues," Computer Methods in Applied Mechanics and Engineering.
414 Dwight, R.P., "Heuristic A Posteriori Estimation of Error due to Dissipation in Finite Volume Schemes and Application to Mesh Adaptation," Journal of Computational Physics, Vol. 227, No. 5, 2008, pp. 2845-2863.
Figure 10.20 Discretization Error in the Drag Coefficient for Transonic Flow over an Airfoil (Reproduced from Dwight)
discretization error and is not the ideal driver for mesh adaption. Furthermore, recovery-based
adaptation, while possible for the finite element method, may not be feasible for other discretization
methods which are not super-convergent.
10.3.1.3 Adjoint-Based Adaption
Another promising method for grid adaptation is the adjoint approach. Adjoint methods hold the
promise of estimating the local contribution of each cell or element to the discretization error in any
solution functional of interest (e.g., lift, drag, and moments), and can thus provide targeted mesh
adaption depending on the goals of the simulation. The main drawback for adjoint methods is their
complexity and code intrusiveness, as evidenced by the fact that adjoint-based adaption has not yet
found its way into commercial CFD codes. An example of adjoint-based mesh adaption in CFD is given
by [Venditti and Darmofal] who successfully applied the method in finite-volume form to inviscid and
viscous flow over airfoils at various Mach numbers. While adjoint methods hold much future promise
in the area of mesh adaption, they are beyond the scope of the current paper.
10.3.1.4 Truncation Error-Based Adaption
In broad terms, the truncation error is the difference between the partial differential equation and its discrete approximation. As will be discussed later, the truncation error provides the contribution of the local element discretization (cell size, skewness, etc.) to the discretization error. As such, the truncation error is a good indicator of where mesh adaptation should occur. The general concept behind truncation error-based adaption is to equidistribute the truncation error over the entire domain to reduce the total discretization error. For simple discretization schemes, the truncation error can be computed directly. For more complex schemes where direct evaluation of the truncation error is difficult, an approach for estimating the truncation error is needed. Section VII of the original paper discusses two approaches for estimating the truncation error. Furthermore, in the finite element
method, a class of error estimators has been developed that rely on the residual. This residual is
found by inserting the finite element solution, which is made up of basis functions and corresponding
coefficients, into the original partial differential equation. As shown before, this residual can be found
in a similar manner as one approach for estimating the truncation error. Thus we expect there to be
a close relationship between residual-based adaption in the finite element method and truncation
error-based adaption with more general discretization schemes.
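As an illustration of direct truncation-error evaluation for a simple scheme, the sketch below (an assumption-laden example, not any cited author's implementation) applies the three-point central-difference operator to a smooth function whose second derivative is known; for a real flow solution one would instead insert a reconstructed or fine-grid solution into the discrete operator.

```python
import numpy as np

def truncation_error_second_derivative(u, u_xx_exact, dx):
    """Truncation error of the 3-point central scheme for u_xx at interior nodes.

    TE_i = (u[i-1] - 2 u[i] + u[i+1]) / dx**2  -  u_xx_exact[i]
    """
    discrete = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    return discrete - u_xx_exact[1:-1]

# Smooth test function u = sin(x) on a uniform mesh (illustration only)
N = 65
x = np.linspace(0.0, 2.0 * np.pi, N)
dx = x[1] - x[0]
te = truncation_error_second_derivative(np.sin(x), -np.sin(x), dx)
print(f"max |TE| = {np.abs(te).max():.3e}  (scales as dx**2 = {dx**2:.3e})")
```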
10.3.2 Current Approach for Performing Mesh Adaptation
Local solution adaption can be conducted by moving points from one region to another (r-adaption),
selectively refining/coarsening cells (h-adaption), or increasing/decreasing the order of accuracy of
the method (p-adaption). For general unstructured grid methods the h-adaption approach is the
most popular, while for structured grid methods the r-adaption approach is most often used. p-
adaption has not found widespread use for CFD problems415. In addition to mesh refinement, other
issues that should be considered when adapting a mesh are mesh quality and the alignment of the
mesh with key solution features. Since the focus here is on methods for driving mesh adaption and
not for performing adaption itself, we limit ourselves to a simple approach of r-adaption in one
dimension based on a linear spring analogy. Extensions to handle multiple dimensions are possible
based on a torsional spring, which serves to prevent skewing of the multi-dimensional cells. First, a
mesh adaptation function φi is created based on solution features (e.g., gradients, curvature),
discretization error, truncation error, etc., where i denotes the mesh node point (ordered from 1 to
N). A weighting function is then created from the mesh adaptation function as:
415Baker, T. J., “Mesh Adaptation Strategies for Problems in Fluid Dynamics,” Finite Elements in Analysis and
Design, Vol. 25, 1997, pp. 243-273.
Wi = |φi |q
Eq. 10.3
where q is an exponent that is set to unity in the present work. This weighting function is used to drive the mesh adaption process, with smaller values denoting a region for mesh coarsening and larger values a region for refinement. The weighting function is then passed through a smoothing algorithm to promote smoothness of the mesh adaptation; this algorithm applies 10 passes of a smoothing operation to all interior points.
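The smoothing operation itself is not reproduced in this excerpt; the following minimal sketch therefore assumes a simple three-point averaging kernel (an assumption, not the original operator) and shows how a gradient-based adaptation function φ is converted into a smoothed weighting function in one dimension.

```python
import numpy as np

def weighting_function(phi, q=1.0, smoothing_passes=10):
    """Build the mesh adaptation weighting W_i = |phi_i|**q and smooth it.

    The (0.25, 0.5, 0.25) kernel used for the 10 smoothing passes is an
    assumption made for illustration; the source does not reproduce the
    exact operator.
    """
    w = np.abs(phi) ** q
    for _ in range(smoothing_passes):
        w[1:-1] = 0.25 * w[:-2] + 0.5 * w[1:-1] + 0.25 * w[2:]
    return w

# Example: adaptation function based on solution gradients of a viscous-shock-like profile
x = np.linspace(-4.0, 4.0, 33)
u = -np.tanh(4.0 * x)                 # Burgers-like steady profile (illustration)
phi = np.gradient(u, x)               # gradient-based adaptation function
W = weighting_function(phi, q=1.0)
print(W.round(3))
```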
Examples of these different weighting functions applied to the steady-state Burgers equation416 for a Reynolds number of 32 are shown in Figure 10.21 for q = 1. Clearly, each approach will produce meshes with different adaptation characteristics. The left part of the figure depicts the exact solution and the numerical solution using 33 uniformly spaced nodes, while the right shows example weighting functions for the region -2 ≤ x ≤ 0 based on solution gradients, solution curvature, discretization error (DE), and truncation error (TE).
416 Dwight, R.P., "Heuristic A Posteriori Estimation of Error due to Dissipation in Finite Volume Schemes and Application to Mesh Adaptation," Journal of Computational Physics, Vol. 227, No. 5, 2008, pp. 2845-2863.
10.3.3 Case Study - Mesh Adaption Results for 1D Burgers Equation (Re = 32)
In this section, different methods for driving the mesh adaption are analyzed as well as the case
without adaption (i.e., a uniform mesh). The four different methods for driving the mesh adaptation
are: adaption based on solution gradients, adaption based on solution curvature, adaption based on
the discretization error (DE), and adaption based on the truncation error (TE). Numerical solutions
to steady-state Burgers equation for Reynolds number = 32 are given in Figure 10.22(left) for the
uniform mesh and the four mesh adaption approaches, all using 33 nodes. The final local node
spacing is given in Figure 10.22 (right) for each method and shows significant variations in the
vicinity of the viscous shock.
Figure 10.22 Adaption Schemes Applied to Burgers Equation: (left) Numerical Solutions and (right) Local Nodal Spacing Δx
flexible for adding or deleting nodes locally, the mesh redistribution approach is widely used to move nodes toward solution features while the connectivity of the mesh is maintained.
Although solution features are adapted by unstructured meshes relatively easily, there are two issues that need to be addressed. One is the maintenance of valid elements. Hanging nodes can be created during a mesh refinement process. Local refinement of hybrid meshes for viscous flow simulations, which contain regular elements such as tetrahedra, prisms, pyramids and hexahedra, is difficult without creating low-quality elements to eliminate hanging nodes. To overcome this issue, an approach using generalized elements is promising424.
The other issue is the quality of the resulting refined meshes. Stretched elements may affect solution accuracy and cause a stiffness problem in numerical simulations. [Mavriplis]425 reports that spanwise grid stretching, which is widely used in aircraft CFD simulations, may have substantial repercussions
417 Yasushi Ito, Alan M. Shih, Roy P. Koomullil and Bharat K. Soni, “A Solution-Based Adaptive Redistribution
Method for Unstructured Meshes”, Dept of Mechanical Engineering University of Alabama at Birmingham,
Birmingham, AL, U.S.A.
418 Ito, Y., Shum, P. C., Shih, A. M., Soni, B. K. and Nakahashi, K., “Robust Generation of High-Quality Unstructured
Meshes on Realistic Biomedical Geometry,” International Journal for Numerical Methods in Engineering, 2006.
419 Cavallo, P. A. and Grismer, M. J., “Further Extension and Validation of a Parallel Unstructured Mesh Adaptation
Package,” AIAA Paper 2005-0924, 43rd Aerospace Sciences Meeting and Exhibit, Reno, NV, 2005.
420 Senguttuvan, V., Chalasani, S., Luke, E. and Thompson, D., “Adaptive Mesh Refinement Using General
Elements,” AIAA Paper 2005-0927, 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 2005.
421 Soni, B. K., Thornburg, H. J., Koomullil, R. P., Apte, M. and Madhavan, A., “PMAG: Parallel Multiblock Adaptive
Grid System,” Proceedings of the 6th International Conference on Numerical Grid Generation in Computational
Field Simulation, London, UK, 1998, pp. 769-779.
422 Shephard, M. S., Flaherty, J. E., Jansen, K. E., Li, X., Luo, X., Chevaugeon, N., Remacle, J.-F., Beall, M. W. and
O’Bara, R. M., “Adaptive Mesh Generation for Curved Domains,” Applied Numerical Mathematics, 2005.
423 Suerich-Gulick, F., Lepage, C. and Habashi, W., “Anisotropic 3-D Mesh Adaptation for Turbulent Flows,” AIAA
Paper 2004-2533, 34th AIAA Fluid Dynamics Conference and Exhibit, Portland, OR, 2004.
424 Senguttuvan, V., Chalasani, S., Luke, E. and Thompson, D., “Adaptive Mesh Refinement Using General
Elements,” AIAA Paper 2005-0927, 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 2005.
425 Mavriplis, D. J., “Grid Resolution Study of a Drag Prediction Workshop Configuration Using the NSU3D
Unstructured Mesh Solver,” AIAA Paper 2005-4729, 23rd Applied Aerodynamics Conference, Toronto, 2005.
on overall simulation accuracy even at very high levels of resolution426. Since typical refinement and redistribution algorithms for unstructured meshes create highly stretched tetrahedra around solution features, validation of the simulation process may be required. If a refined mesh does not have elements with excessively small or large angles, even near solution features, these issues are not a concern.
Here, we propose a solution-based redistribution method for unstructured volume meshes. The
structured mesh redistribution methods only allow nodes to move towards solution features, while
maintaining the mesh connectivity. In our unstructured mesh redistribution method, a mesh is re-
meshed around the solution features detected. The main objective here is to extract strong solution
features as smooth surfaces based on sensor values and then to create high quality elements around
them. The entire domain can be re-meshed with the embedded surfaces using an advancing front
method with tetrahedra and an advancing layer method with prisms or hexahedra if needed.
Alternatively, elements around the feature surfaces are removed from the initial volume mesh and
only the resulting voids are re-meshed to reduce the required CPU time. Two examples are shown to
present how our approach works.
10.4.2 Feature Detection
After a numerical simulation using an initial mesh, the next step is the detection of solution features.
The location of solution features is indicated by the weight function by [Soni et al.]427 or the shock
sensor by [Lovely and Haimes]428. The weight function is calculated based on the conserved variables
and indicates the regions of important flow features. It is defined at each element as follows:
\[
W = 1 + \frac{W_1}{\max(W_1,W_2,W_3)} \oplus \frac{W_2}{\max(W_1,W_2,W_3)} \oplus \frac{W_3}{\max(W_1,W_2,W_3)}
\]
\[
W_k = \bigoplus_{i=1}^{n_q}\left[\left.\frac{q_{i\xi_k}}{|q_i|+\varepsilon}\right/\max_{k=1,2,3}\left(\frac{q_{i\xi_k}}{|q_i|+\varepsilon}\right)\right]
\]
Eq. 10.7
where W_k (k = 1, 2, 3) are the x, y and z components of the normalized gradient, q_{iξ_k} is the k-th component of the gradient calculated using the i-th variable, and q_i is the average of that variable at the centroid of the element. The symbol ⊕ represents the Boolean sum, which for two variables q1 and q2 is defined as
\[
q_1 \oplus q_2 = q_1 + q_2 - q_1 q_2
\]
Eq. 10.8
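A minimal sketch of Eq. 10.7 and Eq. 10.8 for a single element, assuming the cell-averaged variables and their gradients are already available; the normalization used in the accumulation is one plausible reading of the formula above, and all names are illustrative.

```python
import numpy as np

def boolean_sum(a, b):
    """Boolean sum of Eq. 10.8: a (+) b = a + b - a*b."""
    return a + b - a * b

def feature_weight(q, grad_q, eps=1.0e-12):
    """Weight function of Eq. 10.7 for one element.

    q      : (nq,)   cell-averaged values of the nq conserved variables
    grad_q : (nq, 3) gradient components q_{i,xi_k} of each variable
    Returns the scalar weight W = 1 + W1 (+) W2 (+) W3.
    """
    # Normalized gradient magnitude per variable and direction
    g = np.abs(grad_q) / (np.abs(q)[:, None] + eps)          # (nq, 3)
    g_norm = g / (g.max(axis=1, keepdims=True) + eps)        # normalize by max over k
    # Boolean-sum accumulation over the nq variables, per direction
    Wk = np.zeros(3)
    for i in range(g_norm.shape[0]):
        Wk = boolean_sum(Wk, g_norm[i])
    Wk = Wk / (Wk.max() + eps)
    # W = 1 + W1 (+) W2 (+) W3
    return 1.0 + boolean_sum(boolean_sum(Wk[0], Wk[1]), Wk[2])
```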
The shock sensor is based on the fact that the normalized Mach number Mn = 1 at a shock.
\[
M_n = \frac{\mathbf{V}}{a}\cdot\frac{\nabla p}{|\nabla p|} = 1
\]
Eq. 10.9
where a, V and ∇p are the speed of sound, velocity vector and pressure gradient, respectively.
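A small sketch of the shock sensor of Eq. 10.9 for cell-centered data; cells where M_n is close to unity are flagged as candidate shock cells. The tolerance value is an assumption for illustration only.

```python
import numpy as np

def normalized_mach(velocity, speed_of_sound, grad_p, eps=1.0e-12):
    """Normalized Mach number M_n = (V/a) . (grad p / |grad p|), Eq. 10.9.

    velocity       : (n, 3) velocity vectors at cell centroids
    speed_of_sound : (n,)   local speed of sound
    grad_p         : (n, 3) pressure gradient vectors
    """
    n_hat = grad_p / (np.linalg.norm(grad_p, axis=1, keepdims=True) + eps)
    return np.einsum("ij,ij->i", velocity, n_hat) / (speed_of_sound + eps)

def shock_cells(velocity, speed_of_sound, grad_p, tol=0.05):
    """Flag cells where M_n is within tol of unity (tolerance is illustrative)."""
    mn = normalized_mach(velocity, speed_of_sound, grad_p)
    return np.abs(mn - 1.0) < tol
```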
426 Mavriplis, D. J., “Grid Resolution Study of a Drag Prediction Workshop Configuration Using the NSU3D
Unstructured Mesh Solver,” AIAA Paper 2005-4729, 23rd Applied Aerodynamics Conference, Toronto, 2005.
427 Soni, B. K., Koomullil, R., Thompson, D. S. and Thornburg, H., “Solution Adaptive Grid Strategies Based on Point
Redistribution,” Computer Methods in Applied Mechanics and Engineering, Vol. 189, Issue 4, 2000.
428 Lovely, D. and Haimes, R., “Shock Detection from Computational Fluid Dynamics Results,” AIAA Paper 1999-
3285, 14th AIAA Computational Fluid Dynamics Conference, Norfolk, VA, 1999.
1 Select nodes of a volume mesh that have a certain range of sensor values.
2 Also select nodes that are one-ring neighbors of the nodes in Step 1 to eliminate noise due to
truncation errors.
3 Number each cluster of selected nodes, which can be defined by their connectivity, if the mesh has more than one solution feature.
4 Calculate distance from the closest boundary at each selected node. The boundary is
represented by the selected nodes that have at least one unselected node as their one-ring
neighbor. The distance is defined as the number of edges from the boundary.
5 The nodes that have local maxima of the distance values are considered to form medial axes.
The coordinates of the nodes in Step 5 are fitted to functions, such as a plane, quadric or cone, using a least-squares fitting method. The local mesh size can be considered to be the error range of a data point.
429 Marcum, D. L. and Gaither, K. P., “Solution Adaptive Unstructured Grid Generation Using Pseudo-Pattern
Recognition Techniques,” AIAA Paper 97-1860, 13th AIAA CFD Conference, Snowmass Village, CO, 1997.
430 Sheehy, D. J., Armstrong, C.G. and Robinson, D. J., “Shape Description by Medial Surface Construction,” IEEE
Transactions on Visualization and Computer Graphics, Vol. 2, Issue 1, 1996, pp. 62-72.
The reciprocal of the local mesh size is used for weighting. Suppose that a cluster of selected nodes xmj (j = 1, 2,…, nm) is fitted to a function z = f(x, y):
\[
E = \sum_{j=1}^{n_m}\left(\frac{z_j - f(x_j, y_j)}{l_j}\right)^2
\]
Eq. 10.10
where lj is the maximum edge length connected to node j. E should be minimized. The resulting
function should be trimmed to define a surface in the computational domain.
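As an illustration of the fit of Eq. 10.10, the sketch below performs a weighted least-squares fit of selected feature nodes to a plane z = c0 + c1 x + c2 y, weighting each point by the reciprocal of its local mesh size; quadric or cone templates would simply add the corresponding basis terms. The data and names are hypothetical.

```python
import numpy as np

def fit_feature_plane(x, y, z, l_edge):
    """Weighted least-squares fit of feature nodes to z = c0 + c1*x + c2*y.

    x, y, z : coordinates of the selected (medial-axis) nodes
    l_edge  : maximum edge length attached to each node, used as 1/weight
    Returns the coefficients (c0, c1, c2) minimizing Eq. 10.10.
    """
    w = 1.0 / np.asarray(l_edge)                 # weight = reciprocal of local mesh size
    A = np.column_stack([np.ones_like(x), x, y]) # plane basis; extend for quadric/cone
    # Solve the weighted least-squares problem on the scaled system
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
    return coeffs

# Hypothetical cluster of selected nodes scattered about a tilted plane
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
z = 0.2 + 0.5 * x - 0.3 * y + 0.01 * rng.standard_normal(50)
print(fit_feature_plane(x, y, z, l_edge=np.full(50, 0.05)).round(3))
```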
10.4.3.1 Case Study 1 - NACA0012 Wing-Section
Once the feature surface is computed, the surface mesh generation algorithm is applied to create a
high quality mesh on it. Elements of the initial volume mesh, Vn0, near the solution feature are
removed and the void is re-meshed using the advancing front method. Figure 10.23 (c) shows the
resulting volume mesh (110 K nodes; Vn1). As a result, a high quality redistributed mesh is produced
with alignment to the major flow feature. Figure 10.23 (d) shows another redistributed volume
mesh after the second simulation cycle (130 K nodes; Vn2). Figure 10.24 illustrates hybrid meshes
for the same wing geometry to perform viscous flow simulations and Cp distribution. The shock
location is estimated using the same approach from the initial hybrid mesh, Figure 10.24 (a), and
then the entire domain is re-meshed with the embedded surface (Figure 10.24(b)). To avoid
creating skewed elements around the intersection between the wing upper surface and the
embedded surface, the near-field mesh around the wing is generated first. The embedded surface
close to or within the near-field mesh is trimmed automatically, and then the rest of the domain is
filled with tetrahedral elements.
Figure 10.24 Hybrid Meshes for the NACA0012 Wing-Section and Cp Distribution (-1.0 to 1.0)
(a) Initial Hybrid Mesh; (b) Redistributed Hybrid Mesh
considered to represent the most important locations. An approach to obtain medial axes using a
Delaunay triangulation method from the triangulated surfaces can be considered. However, it is
difficult to obtain medial axes automatically as smooth surfaces as discussed in the previous example.
Although the iso-surfaces shown in Figure 10.25 (b) are smoothed using a Laplacian method, many
holes and small features prevent extracting smooth medial axes. The other approach using the least
square fitting method is more appropriate in this case.
After a user specifies one of the template functions, such as a cone, quadric or quartic, and the z axis of the function, a corresponding medial axis is obtained as a mathematical function. The bow shock in front of the capsule is fitted to a quadric, and the shock from its aft is fitted to a cone. The obtained surfaces can be displayed together with the iso-surfaces for reference on the symmetry plane, and the least-squares fitting method estimates the medial axes well. One of the disadvantages of using unstructured meshes is that flow features diverge quickly; this approach enables us to estimate missing flow features.
Figure 10.25 (c) shows a redistributed mesh, which has 0.74 M nodes. In this case, the entire mesh
is regenerated because the shape of the outer boundary is changed to remove extra elements. The
initial mesh shown in Figure 10.25 (a) can be used for cases at different angles of attack, but it has
0.89 million nodes. In addition, the elements around the shocks in the far field are coarse. The initial
mesh gives a carbuncle phenomenon on the bow shock, while the redistributed mesh gives a better result. One of the solution feature surfaces shown in Figure 10.25 (b) fits the bow shock well. The
most notable advantage of the surface-based mesh redistribution method is that anisotropic non-
simplicial elements can be used around the feature surfaces to avoid creating skewed elements.
Figure 10.25 (d) shows a redistributed hybrid mesh, which has 0.62 M nodes, based on the same
numerical result. Prismatic layers are placed around the bow shock.
11 Dynamic Meshing
The moving mesh provides the capability of tackling flow simulations where the domain shape changes during the simulation. In such cases, the computational mesh needs to adapt to the time-varying shape of the domain and preserve its validity and quality. A mesh motion solver calculates the internal point motion based on the prescribed motion of the boundary. The performance of the method is preserved through the choice of cell decomposition, the bounded discretization and the use of iterative solvers431. Dynamic meshes were touched on earlier in connection with Adaptive Mesh Refinement (AMR), where they were characterized as H-, P-, and R-Methods. To recap:
➢ H-Method - It involves automatic refinement or coarsening of the spatial mesh based on
a posteriori error estimates or error indicators. The overall method contains two
independent parts, i.e. a solution algorithm and a mesh selection algorithm.
➢ P-Method - the adaptive enrichment of the polynomial order.
➢ R-Method - The R-Method is also known as Moving Mesh Method (MMM). It relocates
grid points in a mesh having a fixed number of nodes in such a way that the nodes remain
concentrated in regions of rapid variation of the solution.
Most adaptive refinement uses R-Methods, whose key ingredients include interpolation and a time-dependent mesh equation432. In the dynamic mesh, the computational mesh is moved to follow the changing shape of the boundary by moving its points in every step of the transient simulation. The main difficulty in this case is maintaining the mesh validity and quality without user interaction, where the performance is quantified by speed, accuracy, robustness, and stability433.
431 Hrvoje Jasak, Željko Tuković, "Automatic Mesh Motion for the Unstructured Finite Volume Method", ISSN 1333–1124, UDK 532.5:519.6.
432 Tao Tang, "Moving Mesh Methods for Computational Fluid Dynamics", Contemporary Mathematics, 1991.
433 Hrvoje Jasak, Željko Tuković, "Automatic Mesh Motion for the Unstructured Finite Volume Method", ISSN 1333–1124, UDK 532.5:519.6.
"…with moving boundaries and interfaces", Computer Methods in Applied Mechanics and Engineering 119 (1994).
438 Helenbrook, B. T., "Mesh deformation using the bi-harmonic operator", International Journal for Numerical Methods in Engineering.
dynamic re-meshing using a generalized grid interface (GGI)439. The Radial Basis Function (RBF) method and the Delaunay method have been used widely in fluid-structure interaction. An analysis of dynamic-meshing techniques was done by quantifying the accuracy, robustness, stability, and speed of each one; while dynamic re-meshing via solution of a Laplace equation was robust and GGI was the fastest, overset meshing was found to be the most stable and the most general technique for complex geometries and motions440. RBF proved to be too computationally expensive and unrealistic for 3D problems.
\[
\underbrace{\frac{\partial}{\partial t}(\rho\varphi)}_{\text{Transient}}
+ \underbrace{\frac{\partial}{\partial x_j}(\rho u_j\varphi)}_{\text{Convection}}
= \underbrace{\frac{\partial}{\partial x_j}\!\left(\Gamma_\varphi\frac{\partial\varphi}{\partial x_j}\right)}_{\text{Diffusion}}
+ \underbrace{S_\varphi}_{\text{Source}}
\]
Eq. 11.1
where φ is the general scalar quantity, ρ is the fluid density, and u_j is the flow velocity vector. Furthermore, Γ_φ is the diffusion coefficient and S_φ is the source term. After integrating over a Control Volume and applying the divergence theorem, we obtain the integral form
439 OpenFOAM ®.
440 Hrvoje Jasak, Željko Tuković, "Automatic Mesh Motion for the Unstructured Finite Volume Method".
\[
\int_V\left[\frac{\partial}{\partial t}(\rho\varphi) + \nabla\cdot(\rho\mathbf{u}\varphi) - \nabla\cdot(\Gamma_\varphi\nabla\varphi) - S_\varphi\right]dV = 0
\]
\[
\int_V\frac{\partial}{\partial t}(\rho\varphi)\,dV + \oint_A(\rho\mathbf{u}\varphi)\cdot d\mathbf{A} = \oint_A\Gamma_\varphi\nabla\varphi\cdot d\mathbf{A} + \int_V S_\varphi\,dV
\qquad \text{Eq. 11.2}
\]
\[
\frac{d}{dt}\int_V\rho\varphi\,dV + \oint_A\rho\varphi\,(\mathbf{u}-\mathbf{u}_g)\cdot d\mathbf{A} = \oint_A\Gamma_\varphi\nabla\varphi\cdot d\mathbf{A} + \int_V S_\varphi\,dV
\]
Here, ug is the grid velocity of the moving mesh, and A is used to represent the boundary of the control
volume V. The unsteady term (first term) could be written as
\[
\frac{d}{dt}\int_V\rho\varphi\,dV = \frac{(\rho\varphi V)^{n+1} - (\rho\varphi V)^{n}}{\Delta t}
\quad\rightarrow\quad
V^{n+1} = V^{n} + \frac{dV}{dt}\,\Delta t
\qquad \text{Eq. 11.3}
\]
Where dV/dt is the volume time derivative of the control volume. In order to satisfy the grid
conservation law, the volume time derivative of the control volume is computed from
\[
\frac{dV}{dt} = \oint_{A}\mathbf{u}_g\cdot d\mathbf{A} = \sum_{j}^{\text{Faces}}\mathbf{u}_{g,j}\cdot\mathbf{A}_j = \sum_{j}^{\text{Faces}}\frac{\delta V_j}{\Delta t}
\qquad \text{Eq. 11.4}
\]
where δVj is the volume swept out by control-volume face j over the time step Δt. In the case of the sliding mesh, the motion of moving zones is tracked relative to the stationary frame. Therefore, no moving reference frames are attached to the computational domain, simplifying the flux transfers across the interfaces441. In the sliding-mesh formulation, the control volume remains constant, therefore dV/dt = 0 and Vn+1 = Vn, so that
\[
\frac{d}{dt}\int_V\rho\varphi\,dV = \frac{\left[(\rho\varphi)^{n+1} - (\rho\varphi)^{n}\right]V}{\Delta t}
\qquad \text{Eq. 11.5}
\]
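The grid (space) conservation law of Eq. 11.4 can be verified in a preprocessing step by comparing the sum of the face-swept volumes with the actual change in cell volume. The sketch below uses a mid-point approximation of the swept volume; the helper names are illustrative and not taken from any particular solver.

```python
import numpy as np

def swept_volume(face_area_old, face_area_new, centroid_old, centroid_new):
    """Approximate volume swept by one face over a time step.

    Mid-point rule: delta_V ~= 0.5*(A_old + A_new) . (x_new - x_old),
    where A is the outward face-area vector and x the face centroid.
    This is one common first-order approximation, not the only choice.
    """
    a_mid = 0.5 * (np.asarray(face_area_old, float) + np.asarray(face_area_new, float))
    dx = np.asarray(centroid_new, float) - np.asarray(centroid_old, float)
    return float(np.dot(a_mid, dx))

def gcl_residual(v_old, v_new, swept_volumes):
    """Grid conservation law check: V^{n+1} - V^n should equal sum_j delta_V_j."""
    return (v_new - v_old) - sum(swept_volumes)

# Unit cube whose +x face translates by 0.1 in x over one step (illustration)
dV = swept_volume([1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [1.0, 0.5, 0.5], [1.1, 0.5, 0.5])
print(gcl_residual(1.0, 1.1, [dV]))   # ~0 -> discrete GCL satisfied
```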
quite robust even with the complex geometry and high amplitude mesh motion. The mathematical
representation for mesh motion via solution of a Laplace equation is
\[
\nabla\cdot\left(k\,\nabla\mathbf{u}_{i,\text{mesh}}\right) = 0\,,\qquad k = \frac{1}{l^2}
\qquad \text{Eq. 11.6}
\]
where u_{i,mesh} is the velocity of points in the mesh, k is a distance-based diffusivity that minimizes the mesh distortion, and l is the distance to the moving boundary. The body is rotated or transformed in some manner described by the user, and the points on the body are moved based on a coordinate transformation. The points surrounding the body are moved based on the Laplace equation above. A modest amount of error is introduced before the cells surrounding the body are moved443.
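A minimal one-dimensional sketch of Eq. 11.6, assuming a uniform initial spacing, a frozen diffusivity k = 1/l² and simple Jacobi sweeps; a production solver would use the unstructured finite-volume discretization and a proper linear solver. Because k is largest next to the moving wall, the near-wall cells move almost rigidly with the boundary, which is the intent of the distance-based diffusivity.

```python
import numpy as np

def laplacian_mesh_motion_1d(x, u_moving, n_iter=2000):
    """Solve d/dx( k du/dx ) = 0 for mesh-point displacement u (cf. Eq. 11.6).

    x        : node coordinates, x[0] on the moving wall, x[-1] on the fixed boundary
    u_moving : prescribed displacement of the moving wall
    k = 1/l**2 with l the distance to the moving boundary (regularized at the wall).
    """
    l = np.abs(x - x[0]) + 1.0e-6          # distance to the moving boundary
    k = 1.0 / l**2                         # variable diffusivity
    u = np.zeros_like(x)
    u[0], u[-1] = u_moving, 0.0            # Dirichlet BCs: moving and fixed boundaries
    k_face = 0.5 * (k[:-1] + k[1:])        # diffusivity at mid-edges
    for _ in range(n_iter):                # Jacobi sweeps (illustrative, not optimized)
        u[1:-1] = (k_face[:-1] * u[:-2] + k_face[1:] * u[2:]) / (k_face[:-1] + k_face[1:])
    return u

x = np.linspace(0.0, 1.0, 21)
print((x + laplacian_mesh_motion_1d(x, u_moving=0.1)).round(3))  # deformed node positions
```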
11.4.2 Pseudo-Solid Equation
While the Laplace equation only allows direction-decoupled transfinite mapping, the pseudo-solid
equation also allows rotation. However, this comes at a relatively high price: the pseudo-solid
equation couples the components of the motion vector due to rotation. The choice here is either an
increase in storage associated with the block solution of all displacement components or an iterative
segregated solution method.
443 Gina M. Casadei, “Dynamic-Mesh Techniques for Unsteady Multiphase Surface-Ship Hydrodynamics”, A Thesis
in Mechanical Engineering, Pennsylvania State University, December 2010.
444 Hrvoje Jasak, Željko Tuković, "Automatic Mesh Motion for the Unstructured Finite Volume Method", ISSN
Conference on Numerical Grid Generation in Computational Field Simulations, Whistler, British Columbia,
Canada, 2000.
446 Helenbrook, B. T., Mesh deformation using the biharmonic operator, International journal for numerical
447 Tuković, Ž., "Finite volume method on domains of varying shape (in Croatian)", Ph.D. thesis, Faculty of mechanical engineering and naval architecture, University of Zagreb, 2005.
448 Tysell, L., Grid Deformation of 3D Hybrid Grids. Proceedings of the 8th International Conference on Numerical
Grid Generation in Computational Field Simulations, pp. 265-274, International Society of Grid Generation
(ISGG), Honolulu, Hawaii, USA, 2002.
449 Helenbrook, B., Mesh Deformation Using the Biharmonic Operator. International Journal for Numerical
The biharmonic surface grid projection algorithm used for h-refinement may also be used for the generation of the initial surface grid. This algorithm handles surface patches with poor parameterization and internal surface discontinuities better than bicubic splines. The algorithm has later been modified in [Tysell]450. The latest improvements are the use of more edge swapping in order to get a more regular mixed grid, and the fixing of the position of some nodes close to or on the curves defining the surface patch. The initial position of the nodes in the mixed grid can be computed using a tensor-product patch definition. The algorithm is then used to adjust the position of the nodes in order to obtain a smooth surface grid.
11.4.4 Radial Basis Function451
A common problem in CFD is maintaining high mesh quality during large transformations and
rotations, as shown in the Laplace equation method as described before. One mesh technique that
can handle large mesh deformations is based on the interpolation of Radial Basis Functions (RBF).
This technique can offer superior mesh motion in terms of mesh quality on average but can be
computationally expensive. It is critical when using RBF that the mesh quality remains high. If the
worst mesh quality is too low, the simulation will diverge. However, if the mesh quality remains high,
the simulation will remain stable, accurate and efficient. Bos 452 studied the wing performance for
flapping wings of insects at small scales. The RBF method can handle this motion by interpolating the
displaced boundary nodes on the surrounding mesh. Bos also studied the difference between using
the Laplace equation with variable diffusivity, solid body rotation stress equation and RBF. The
skewness and non-orthogonality values were compared for all cases and the RBF showed higher
mesh quality for both skewness and non-orthogonality.
450 Tysell, L., CAD Geometry Import for Grid Generation. Proceedings of the 11th ISGG Conference on Numerical
Grid Generation, International Society of Grid Generation (ISGG), Montreal, Canada, 2009.
451 Gina M. Casadei, “Dynamic-Mesh Techniques for Unsteady Multiphase Surface-Ship Hydrodynamics”, A Thesis
Figure 11.4-(b) clearly displays that the RBF mesh deforms smoothly around the rotating rectangle, unlike the Laplacian mesh motion (Figure 11.4-(a)), which has highly skewed cells around the rectangle. Mesh quality is thus better preserved with RBF. However, RBF requires much more computational effort between iterations during the mesh update, which is a significant drawback of this method. The interpolation function s(x) defined below describes the displacement of all computational mesh points by summing a set of basis functions:
\[
s(\mathbf{x}) = \sum_{j=1}^{N_b}\gamma_j\,\phi\!\left(\left\|\mathbf{x} - \mathbf{x}_{b_j}\right\|\right) + q(\mathbf{x})
\qquad \text{Eq. 11.8}
\]
where x_{b_j} = [x_{b_j}, y_{b_j}, z_{b_j}] are the boundary nodes at which the displacement values are prescribed, φ is the radial basis function, and q(x) is a low-order polynomial.
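A compact sketch of Eq. 11.8, assuming a Wendland C2 compactly supported basis function (a common choice in the literature, but an assumption here) and omitting the low-order polynomial q(x) for brevity: the coefficients γ are obtained from the boundary-node system and then evaluated at the interior nodes.

```python
import numpy as np

def wendland_c2(r, radius):
    """Wendland C2 compactly supported radial basis function."""
    xi = np.clip(r / radius, 0.0, 1.0)
    return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

def rbf_mesh_deformation(xb, db, x_interior, radius=1.0):
    """Interpolate boundary displacements db at nodes xb onto interior nodes.

    xb         : (Nb, 3) boundary node coordinates
    db         : (Nb, 3) prescribed boundary displacements
    x_interior : (Ni, 3) interior node coordinates
    Implements s(x) = sum_j gamma_j * phi(|x - x_bj|), polynomial term omitted.
    """
    # Boundary-to-boundary interpolation matrix and coefficient solve
    Mbb = wendland_c2(np.linalg.norm(xb[:, None, :] - xb[None, :, :], axis=-1), radius)
    gamma = np.linalg.solve(Mbb, db)                    # (Nb, 3) coefficients
    # Evaluate the interpolant at the interior nodes
    Mib = wendland_c2(np.linalg.norm(x_interior[:, None, :] - xb[None, :, :], axis=-1), radius)
    return Mib @ gamma                                  # (Ni, 3) interior displacements
```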
453Ismail Bello, Shahrokh Shahpar, “Mesh Morphing For Turbomachinery Applications Using Radial Basis
Functions”, Rolls-Royce PLC CFD Methods Group, Derby, UK.
surface, and it was found that variations in the geometry were introduced by various external factors.
To understand the differences introduced by these variations on the dynamics of the flow around the
OGVs, an accurate representation of the manufactured assembly must be generated and meshed in
order to perform new CFD simulations. This is critical in order to understand local effects introduced
by these variations, as well as system-level effects such as changes in the gas turbine performance. These variations are not easily represented by changes in the parametric representation of the blades, and as such form an ideal candidate for our test case. For each blade, the displacement field required to morph the mesh generated from the parametric representation can be computed by calculating the signed shortest distance field between the STL scan and the mesh; this was done using an in-house code built for this purpose called Point2Surface (P2S)454. P2S computes the Hausdorff distance between the two surfaces and its associated direction. This distance computation is only required on the outlet guide vane surface, the inner boundary surface in Figure 11.5. For all other surfaces, we require these to be fixed. In other words, the morphing process is purely needed to adapt the volume mesh local to the OGV surface naturally and smoothly. A number of challenges were encountered in this application, but practical solutions were found which allowed the use of the method to produce valid meshes for all blades (see [Bello and Shahpar])455.
Figure 11.5 Outlet Guide Vane (OGV) Boundary Surfaces Defined for a Single Passage
Local features in the underlying displacement field presented an issue in avoiding negative volumes. Three kinds of features were found to affect the validity of the result. One is high displacements applied to cells in high-curvature areas, particularly near the leading and trailing edges; this is currently being investigated as an improvement to the sampling routines and distance computations. In addition, because the Hausdorff distance is not necessarily normal to the original surface, angular variations are found where certain features are present on the surfaces (see Figure 11.6). Where the distance field locally deviates strongly from the surface normal, the morphing process was found to be prone to inversions, i.e., negative-volume cells. These variations were usually due to local features in the scanned component, such as surface imperfections and fillet radii. In order to avoid this, an additional filtering of the input data was performed to exclude such measurements from the sampled data. In other words, given a threshold angle, all distance vectors that deviate from the surface normal by more than that value are excluded from the morphing constraints so as not to capture such features.
Figure 11.6 Typical Angular Variation Between the Computed Distance Field Vector and the Surface
Normal (OGV not shown to scale)
The GGI design rationale with respect to examples like Turbomachinery is:
• Apart from "fully overlapped" cases, turbomachinery meshes contain similar features that should employ identical methodology, but are not quite the same.
• Non-matching cyclic (periodic) boundaries for a single rotor passage.
• Partial overlap for different rotor-stator pitch.
• Mixing plane to perform averaging instead of coupling directly.
458 Hrvoje Jasak, “General Grid Interface Theoretical Basis and Implementation”, Wikki Ltd, United Kingdom.
459 S.E. Sherer and J.N. Scott. “High-order compact finite-difference methods on general overset grids”. Journal of
Computational Physics, 210(2):459–496, 2005.
460 P.M. Carrica, R.V. Wilson, R.W. Noack, and F. Stern, “Ship motions using single phase level set with dynamic
\[
e_j = \frac{S_j}{S}\quad (j = 1,2,3)\,,\qquad
e_j = \frac{V_j}{V}\quad (j = 1,2,3,4)
\qquad \text{Eq. 11.9}
\]
The key step in the Delaunay method is moving the Delaunay triangulation based on the boundary deformation. All the connectivity and vertex indices should be kept. If the deformation is too large, the deformed triangulation may fail (become invalid); in this case, the deformation is split into smaller steps and the procedure returns to Step 1, regenerating the Delaunay triangulation. Finally, the spatial nodes are relocated: using the area or volume coordinates from Step 2, they are converted back into Cartesian coordinates in the moved triangulation, as in:
\[
\mathbf{x}_p = \sum_{i=1}^{4} e_i\,\mathbf{x}_i
\qquad \text{Eq. 11.10}
\]
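A minimal two-dimensional sketch of the relocation step of Eq. 11.9 and Eq. 11.10: the area (barycentric) coordinates of a spatial node with respect to its background Delaunay triangle are computed once and reused after the triangle's vertices move with the boundary. All values and names are illustrative.

```python
import numpy as np

def area_coordinates(p, tri):
    """Area coordinates e_j = S_j / S of point p in triangle tri (3x2 array), Eq. 11.9."""
    def area(a, b, c):
        return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))
    s = area(*tri)
    return np.array([area(p, tri[1], tri[2]),
                     area(tri[0], p, tri[2]),
                     area(tri[0], tri[1], p)]) / s

def relocate(e, tri_deformed):
    """Relocate the node in the deformed triangle: x_p = sum_i e_i * x_i, Eq. 11.10."""
    return e @ tri_deformed

# Background triangle before and after the boundary deformation (illustration)
tri0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri1 = np.array([[0.0, 0.0], [1.0, 0.1], [-0.1, 1.0]])   # deformed vertices
p = np.array([0.3, 0.3])
e = area_coordinates(p, tri0)          # computed once, kept fixed
print(relocate(e, tri1))               # new position of the spatial node
```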
An interesting discussion of the Delaunay graph method with damping functions (DGMDF) for fluid-structure interaction, which is reported to improve the efficiency of the basic method, is provided by [Wang et al.]462.
11.4.7.1 Case Study - Airfoil Rotation
In this case, a NACA 0012 airfoil is rotated 30 degrees about its trailing edge. The fluid domain uses a structured mesh with a lowest quality of 0.7. The surface grids are quadrilaterals with 200 nodes and the spatial grids are hexahedra with 40,325 nodes. Figure 11.9-(a) demonstrates the mesh before deformation. For the Delaunay method, the first step is generating the Delaunay triangulation from the geometry boundary, as
462Yibin Wang, Ning Qin , Ning Zhao, “Delaunay graph-based moving mesh method with damping functions”,
Chinese Journal of Aeronautics, August 2018.
shown in Figure 11.9-(b). Then, the area/volume coordinates of the spatial nodes in the Delaunay triangulation are computed. After the geometry deformation, the Delaunay triangulation deforms as in Figure 11.9-(c). By keeping these coordinates unchanged, the spatial nodes are relocated in Cartesian coordinates. Figure 11.9-(d) shows the mesh after deformation463.
11.4.8 Spring Analogy
Another technique is the spring analogy, in which the mesh nodes are connected through tension springs whose stiffness is related to the length of the edge. This approach tends to produce highly deformed meshes with collapsed or negative-volume cells and is incapable of reproducing solid-body rotation. The tension-spring model has been improved by attaching torsion springs to each vertex, where the stiffness is related to the angle.
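A small one-dimensional sketch of the tension-spring analogy described above: each edge acts as a spring with stiffness inversely proportional to its length, and interior nodes are relaxed to equilibrium after the end nodes are displaced. The torsional-spring extension is not included, and the iteration count is arbitrary.

```python
import numpy as np

def spring_analogy_1d(x, boundary_disp, n_iter=500):
    """Move interior nodes by the linear tension-spring analogy.

    x             : initial node coordinates (1D mesh)
    boundary_disp : (left, right) prescribed displacements of the end nodes
    Edge stiffness k_e = 1 / edge_length; equilibrium found by Jacobi sweeps.
    """
    x_new = x.copy()
    x_new[0] += boundary_disp[0]
    x_new[-1] += boundary_disp[1]
    k = 1.0 / np.diff(x)                     # stiffness of each edge (held fixed)
    for _ in range(n_iter):                  # relax interior nodes to force balance
        x_new[1:-1] = (k[:-1] * x_new[:-2] + k[1:] * x_new[2:]) / (k[:-1] + k[1:])
    return x_new

x = np.linspace(0.0, 1.0, 11)
print(spring_analogy_1d(x, boundary_disp=(0.0, -0.2)).round(3))
```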
11.4.9 Six Degrees of Freedom (6-DOF)
11.4.9.1 Translational Deformation
The grid was deformed in the y- and z-directions using a spring analogy technique. The 6DOF solver uses the object's forces and moments in order to compute the translational and angular motion of the center of gravity of an object. The governing equation for the translational motion of the center of gravity is solved for in the inertial coordinate system. As an example, Figure 11.10 shows the mesh after the translational deformation for a wing. The original un-deformed mesh is shown in grey, and the deformed mesh is shown in red. In this case the tip deformation along the y-axis is 20% of the wing semi-span464.
Figure 11.10 Mesh Before and After the Translational Deformations
11.4.9.2 Rotational Deformation
The rotated grid can be obtained by multiplying the original grid by the rotation matrix R:
\[
\mathbf{R} =
\begin{bmatrix}
C_\theta C_\psi & C_\theta S_\psi & -S_\theta \\
S_\varphi S_\theta C_\psi - C_\varphi S_\psi & S_\varphi S_\theta S_\psi + C_\varphi C_\psi & S_\varphi C_\theta \\
C_\varphi S_\theta C_\psi + S_\varphi S_\psi & C_\varphi S_\theta S_\psi - S_\varphi C_\psi & C_\varphi C_\theta
\end{bmatrix}
\qquad \text{Eq. 11.11}
\]
where, in generic terms, C_X = cos(X) and S_X = sin(X). The angles φ, θ, and ψ are the Euler angles defining the rotation.
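A short sketch of Eq. 11.11: the rotation matrix is assembled from the Euler angles and applied to the grid points about the centre of gravity. The helper names and numerical values are illustrative only.

```python
import numpy as np

def euler_rotation_matrix(phi, theta, psi):
    """Rotation matrix R of Eq. 11.11 with C_X = cos(X), S_X = sin(X)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cth * cps,                   cth * sps,                   -sth     ],
        [sph * sth * cps - cph * sps, sph * sth * sps + cph * cps, sph * cth],
        [cph * sth * cps + sph * sps, cph * sth * sps - sph * cps, cph * cth],
    ])

def rotate_grid(points, cg, phi, theta, psi):
    """Rotate grid points (N, 3) about the centre of gravity cg."""
    R = euler_rotation_matrix(phi, theta, psi)
    return (points - cg) @ R.T + cg

# Rotate a wing-tip point 5 degrees in pitch about the CG (illustration only)
pts = np.array([[5.0, 10.0, 0.0]])
print(rotate_grid(pts, cg=np.array([2.0, 0.0, 0.0]), phi=0.0, theta=np.deg2rad(5), psi=0.0))
```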
463 Jia Huan, Sun Qin, "A Comparison of Two Dynamic Mesh Methods in Fluid-Structure Interaction", School of Aeronautics, Northwestern Polytechnical University, Xi'an, China. 2nd International Conference on Electronic & Mechanical Engineering and Information Technology (EMEIT-2012).
464 Joaquin Ivan Gargoloff, “A Numerical Method For Fully Nonlinear Aero-elastic Analysis”, Dissertation,
Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements
for the degree of Doctor of Philosophy, 2007.
Figure 11.11 Mesh Before and After the x-axis Rotational Deformation (undeformed and deformed)
Where τ is an artificial time. Then, beginning with an initial guess, we march in “time” to steady state.
Any discrete marching scheme to solve (Eq. 11.12) can be regarded as an iterative method to solve
the nonlinear system (Eq. 11.6). At τ = 0, we can find the solution up to steady state to obtain a
mesh that adapts well to the initial data. With this initial adaptive mesh, the solution u can be updated
(using the underlying PDE) one time step. Then a new mesh is obtained using the updated u in the
monitor function. However, since u changes only very little in one time step, it is not necessary to
solve again (Eq. 11.12) all the way to steady state. Besides, the initial mesh is already a very good
initial guess. Thus, it is natural to march only one time step in (Eq. 11.12) (or equivalently to do only one iteration) at a time; in other words, τ is taken as the actual time. Therefore, we proceed by solving the moving mesh equation and the underlying PDEs alternately, one time step at a time468.
11.5.1 Case Study - Dynamically Adaptive Mesh Refinement FDTD: A Stable and Efficient
Technique for Time-Domain Simulations469
The Finite-Difference Time-Domain (FDTD) technique has been extensively employed in the modeling of
microwave and optical structures, due to its simplicity and versatility. However, these FDTD qualities are
partially compensated by the stability and numerical dispersion limitations on the choice of the cell size
and the time step of the method, that render its application to complex and/or electrically large structures
computationally expensive. In the past, a variety of static sub-gridding techniques have been proposed,
aimed at accelerating the conventional FDTD technique for structures with localized fine geometric
features. According to such approaches, local mesh refinement is pursued in a priori defined regions of a
computational domain, as dictated by physical considerations.
For example, the presence of metallic edges, or high dielectric permittivity inclusions, would call for a
locally dense mesh, embedded in a coarser global one. The use of local mesh refinement typically results
in significant computational savings compared to the conventional FDTD, despite the fact that its
implementation is associated with additional interpolation and extrapolation operations in both space and
time. However, static mesh refinement ignores the dynamic nature of time-domain field simulations. In
fact, techniques such as FDTD and the transmission line matrix (TLM) essentially register the evolution of
a broadband pulse propagating in a device under test, along with its retro-reflections. Hence, a localized
discontinuity in a simulated domain is only illuminated for a (potentially small) fraction of the total
simulation time, during which a local mesh refinement around it is needed. Therefore, static mesh
refinement, which is widely employed in frequency-domain simulations and has been incorporated in
commercial finite-element tools, is only a sub-optimal solution to the mesh refinement problem in the
framework of time-domain analysis.
Recently, the AMR technique was coupled with FDTD to produce a dynamically mesh adaptive FDTD
algorithm, that was successfully applied to microwave integrated circuit and optical waveguide problems.
Instead of applying local mesh refinement in a priori defined regions of a computational domain, the
dynamic AMR-FDTD uses sub-grids, which are adaptively defined according to the spatial-temporal
evolution of field distributions. As a result, significant execution time savings, up to two orders of
magnitude, are attainable for large-scale open-domain problems. In this paper, the dynamic AMR-FDTD approach is explained, and realistic applications demonstrating the salient features of the method are provided.
468 Hector D. Ceniceros and Thomas Y. Hou, “An Efficient Dynamically Adaptive Mesh for Potentially Singular
Solutions”, Journal of Computational Physics 172, 609–639 (2001).
469 Yaxun Liu and Costas D. Sarris, “Dynamically Adaptive Mesh Refinement FDTD: A Stable And Efficient
Technique For Time-Domain Simulations”, Department of Electrical and Computer Engineering University of
Toronto, Toronto, ON, M5S 3G4, Canada.
of AMR-FDTD child meshes converges to zero over time, implying that only the root mesh is still
present at a late stage of the code. Therefore, no spatial or temporal interpolation operations, which
are the primary sources of instabilities in adaptive mesh FDTD codes, are applied then. This is an
additional advantage of using a dynamically adaptive instead of a statically adaptive mesh in time-
domain simulations. For further information, please refer to the development in [Liu and Sarris]470.
470Yaxun Liu and Costas D. Sarris, “Dynamically Adaptive Mesh Refinement FDTD: A Stable And Efficient
Technique For Time-Domain Simulations”, Department of Electrical and Computer Engineering University of
Toronto, Toronto, ON, M5S 3G4, Canada.
471 Sideroff, C., "Multi-Block Structured Meshing and Pre-Processing for OpenFOAM Turbomachinery Analysis".
472 Stephen Ferguson, CD-Adapco Blog, “Nature’s Answer to Meshing”, 2013.
473 Emre Sozer, Christoph Brehm and Cetin C. Kiris, “Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
are also less sensitive to stretching than tetrahedra. Smart grid generation and optimization
techniques offer limitless possibilities: cells can automatically be joined, split, or modified by
introducing additional points, edges and faces. Indeed, substantial improvements in grid quality are
expected in the future, benefiting both solver efficiency and accuracy of solutions.
Polyhedral cells are especially beneficial for handling recirculating flows. Tests have shown that, for
example, in the cubic lid-driven cavity flow, many fewer polyhedra are needed to achieve a specified
accuracy than even Cartesian hexahedra (which one would expect to be optimal for rectangular
solution domains). In fact, for a hexahedral cell there are three optimal flow directions which lead to
the maximum accuracy (normal to each of the three sets of parallel faces); for a polyhedron with 12
faces, there are six optimal directions which, together with the larger number of neighbors, lead to a
more accurate solution with a lower cell count.
Tetrahedra are the simplest form of volume elements, and tetrahedral meshes are able to
approximate any arbitrarily shaped continuum with a remarkable level of detail. Automated
tetrahedral mesh generation methods have been well studied and developed, currently providing the
only robust solution for meshing complex geometries in 3D and making them a standard choice of
major CFD codes. However, despite the fact that tetrahedra present several geometric assets, such as
planar faces and well defined face and volume centroids, they suffer from certain disadvantages that
make analysts deem them inferior to hexahedra. Tetrahedral elements cannot provide reasonable
accuracy as soon as they become too elongated, which is often the case in boundary layers or sharp
corners of the domain. Furthermore, they have only four neighbors, making them not an optimal
choice for CFD, as computation of gradients at cell centers can become problematic. It is, therefore,
not unusual during simulations for serious numerical stability issues to appear, in addition to the
reduced accuracy, and for problematic convergence properties to dominate the analysis. Figure 12.4
indicates the pros and cons of polyhedral cells versus tetrahedral ones.
Figure 12.4 Polyhedral cells vs Tetrahedral cells
12.1.5.1 Boundary Prismatic Cells
An issue for general unstructured cells is the treatment of boundary cells. Several remedies exist in order to
overcome those disadvantages. A boundary layer, formed using prismatic elements along walls, is
able to balance, up to a certain degree, the negative effects on accuracy and stability (see Figure 12.5-
left). Furthermore, advanced discretization methods combined with very fine meshes can result in
accurate solutions and good convergence properties. This, however, demands increased memory
usage and computing time, while it makes the analysis code more complicated. Recently, an
alternative option to tetrahedral meshes has emerged, suggesting the use of polyhedral elements
instead474. Polyhedra offer the same level of automatic mesh generation as tetrahedra do, while they
are able to overcome the disadvantages inherent to tetrahedral meshes (see Figure 12.5-right). A
major advantage of polyhedra arises from the fact that they are bounded by many neighbors,
making the approximation of gradients much better than for tetrahedra. Furthermore, they are much less
sensitive to stretching and, since their typically irregular shape is not a restriction for several CFD
codes, they offer the possibility of post-processing and optimization without the strict geometric
criteria that are necessary for optimizing tetrahedral, or even hexahedral, meshes.
On the negative side, polyhedra are usually of much more complex geometry than regular volumes,
and, depending on the generation method, it cannot always be guaranteed that they are convex, or,
even more, that their faces are planar. The topology of polyhedral meshes is, typically, also complex,
preventing the implementation of efficient and easy to maintain generation algorithms from being
straightforward. As a further consequence, polyhedral meshes require a considerable amount of
adjacency relations, in comparison to tetrahedral and hexahedral meshes, making them candidates
for resource-expensive solutions. All the above set the basis for an interesting field of exploration in
volume meshing. Previous studies on the subject have shown promising results; however, polyhedral
meshing is still far from becoming a standard practice in CFD simulations. Some explanations for this
may be its limited adoption by analysis codes and the fact that polyhedra are not an appropriate
solution for every type of analysis. It should be mentioned that, currently, polyhedral meshes attract
more attention in fields such as Computer Graphics and Medical Imaging, where 3D volume
rendering is of specific interest. However, the few research efforts dedicated to exploring polyhedral mesh
generation for CFD remain active, making constant progress towards more efficient methods and
higher quality meshes.
Figure 12.5 Boundary prism cells for tetrahedral (left) and polyhedral (right) cells – (Courtesy of
CD-Adapco)
474M. Peric, “Simulation of flows in complex geometries: New meshing and solution methods”, NAFEMS seminar:
Simulation of Complex Flows (CFD).
475 Emre Sozer, Christoph Brehm and Cetin C. Kiris,”Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
476 Diskin, B., Thomas, J., Nielsen, E., Nishiwaka, H., and White, J., “Comparison of Node-Centered and Cell-
Centered Unstructured Finite-Volume Discretization: Viscous Fluxes,” AIAA Journal, Vol. 48, No. 7, 2010.
477 Diskin, B. and Thomas, J. ,“Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume
Discretization: Inviscid Fluxes,” AIAA Journal, Vol. 49, No. 4, 2011, DOI: 10.2514/1.J050897.
478 Emre Sozer, Christoph Brehm and Cetin C. Kiris, “Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
479 Aftosmis, M., Gaitonde, D., and Tavares, T., “Behavior of Linear Reconstruction Techniques on Unstructured
Meshes,” AIAA J., vol. 33, no. 11, pp. 2038-2049, 1995.
480 Mavriplis, D., "Revisiting the Least-Squares Procedure for Gradient Reconstruction on Unstructured Meshes".
481 "... Arbitrary Polyhedra", 48th AIAA Aerospace Sciences Meeting, Jan 4-7, Orlando, FL, 2010.
482 Shima, E., Kitamura, K., and Haga, T., “Green-Gauss/Weighted-Least-Squares Hybrid Gradient Reconstruction
for Arbitrary Polyhedra Unstructured Grids,” AIAA Journal, Vol. 51, No. 11, 2013, DOI: 10.2514/1.J052095.
et al.]483. They focus on cost and performance in volume rendering with respect to mesh resolution,
element shapes, neighborhood size and scalar field complexity. They find the inverse weighted
regression method to provide the highest accuracy for irregular meshes and the Green-Gauss method
to perform poorly for badly shaped elements.
12.2.3 Gradient Calculation
The difficulty in calculating gradients in an unstructured mesh stems from the lack of a consistent
and inherent connectivity. The stencil for gradient calculation, as well as the corresponding
coefficients, vary cell by cell and are costly to compute; hence, they are typically pre-computed and
stored. Two of the most common methods for gradient calculation are the Green-Gauss and the Least
Squares approaches. Both have several common variations, some of which are explained in the
following sections.
12.2.3.1 Green-Gauss Gradient Method
The Green-Gauss method represents an intuitive, sound basis for gradient calculation. According to
the Green-Gauss theorem, the average gradient of a scalar φ in a closed volume V can be obtained by

\overline{\nabla\phi} = \frac{1}{V}\int_V \nabla\phi \, dV = \frac{1}{V}\oint_A \phi\, \hat{n}\, dA          Eq. 12.1

where n̂ is the surface unit normal vector and A is the surface area. For a 2nd order scheme with
midpoint quadrature, the Green-Gauss method takes on the following discrete form for a polyhedron:

\overline{\nabla\phi} = \frac{1}{V}\sum_{f=1}^{N_{faces}} \phi_f\, \hat{n}_f\, A_f          Eq. 12.2
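A minimal sketch of this discrete Green-Gauss gradient for a single cell is given below; it assumes that face-averaged values, outward unit normals, face areas and the cell volume are already available, and it is not tied to any particular solver.

import numpy as np

def green_gauss_gradient(phi_faces, normals, areas, volume):
    """Discrete Green-Gauss gradient (Eq. 12.2) for one cell.

    phi_faces : (Nf,)   face-averaged scalar values
    normals   : (Nf, 3) outward unit normals of the faces
    areas     : (Nf,)   face areas
    volume    : float   cell volume
    """
    flux = (phi_faces[:, None] * normals) * areas[:, None]  # phi_f * n_f * A_f
    return flux.sum(axis=0) / volume

# Unit cube with phi = x sampled at face centroids: the gradient is (1, 0, 0).
normals = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
areas = np.ones(6)
phi_faces = np.array([1.0, 0.0, 0.5, 0.5, 0.5, 0.5])  # x at each face centroid
print(green_gauss_gradient(phi_faces, normals, areas, volume=1.0))  # ~[1, 0, 0]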
483Correa, C., R., H., and K., M., “A Comparison of Gradient Estimation Methods for Volume Rendering on
Unstructured Meshes,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 3, March, 2011.
Detailed information regarding these methods and more is available in484.

\phi_f = \frac{\sum_{i=1}^{N} \phi_i / d_i^2}{\sum_{i=1}^{N} 1 / d_i^2}          Eq. 12.4
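A minimal sketch of this inverse-distance-weighted face averaging is shown below; the set of contributing cells and the squared-distance weighting follow the reconstruction of Eq. 12.4 above and should be treated as assumptions rather than the authors' exact formulation.

import numpy as np

def idw_face_value(phi_cells, cell_centers, face_centroid):
    """Inverse-distance-weighted face value (Eq. 12.4).

    phi_cells    : (N,)   cell-centered values contributing to this face
    cell_centers : (N, 3) coordinates of those cell centers
    face_centroid: (3,)   coordinates of the face centroid
    """
    d = np.linalg.norm(cell_centers - face_centroid, axis=1)
    w = 1.0 / d**2                      # weights 1/d_i^2 (assumed exponent)
    return np.sum(w * phi_cells) / np.sum(w)

# Face midway between two cells: the value reduces to the arithmetic mean.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(idw_face_value(np.array([2.0, 4.0]), centers, np.array([0.5, 0.0, 0.0])))  # 3.0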
484Emre Sozer, Christoph Brehm and Cetin C. Kiris, “Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
Figure (panels): (a) LSQR; (b) GG Simple
surprising as both methods are linear-exact and both utilize compact stencils. Note that the
curvilinear gradient operator has the most compact stencil of the alternative methods in scope here,
utilizing at most 4 points (for 2D).
12.2.5 Results Based on L2 Norm
While order of accuracy is a crucial property to inspect, it is pertinent to look at the actual error levels,
as several of the gradient operator choices were demonstrated to satisfy 1st order accuracy. Figure
12.9 shows L2 error norms with respect to the refinement level. First, we would like to clarify the
peculiar behavior of the Green-Gauss method with simple and IDW face averaging. It seems to
approach a 1st order convergence rate before stalling at a fixed error level. This is due to the
aforementioned inconsistency, as they converge, in a 1st order manner, to a gradient value that is
not consistent with the exact value. Note here that without a deep enough convergence study, this
issue could have been overlooked, leading to a false conclusion that these methods are 1st order
accurate. The rest of the operators are all linear-exact, and consequently they all consistently exhibit
1st order accuracy, as was apparent from the GOA distributions shown earlier on the test mesh. The
error norms shown in Figure 12.9 now reveal that the Green-Gauss methods (with WTLI or LSQR face
averaging) yield significantly larger errors compared to the curvilinear or the LSQR methods. Within
the latter group, the LSQR compact has slightly lower error than the LSQR extended, while the
curvilinear method places in between.
Figure 12.9 Global L2 Error Norms for the x-Direction Gradient vs Refinement Level, for Various
Gradient Methods
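The kind of convergence study discussed here can be summarized by a volume-weighted L2 error norm and an observed order of accuracy between successive refinement levels. The sketch below, with made-up error values, illustrates how a stalling (inconsistent) scheme shows up as an apparent order dropping towards zero; it is an illustration, not the authors' post-processing code.

import numpy as np

def global_l2_error(grad_numerical, grad_exact, volumes):
    """Volume-weighted global L2 norm of the gradient error over all cells."""
    err = grad_numerical - grad_exact
    return np.sqrt(np.sum(volumes * err**2) / np.sum(volumes))

def observed_order(err_coarse, err_fine, ratio=2.0):
    """Observed order of accuracy between two meshes refined by 'ratio'."""
    return np.log(err_coarse / err_fine) / np.log(ratio)

# A 1st order method roughly halves its error per refinement level; an
# inconsistent scheme stalls and its apparent order drops towards zero.
errors = [1.0e-2, 5.2e-3, 2.6e-3, 2.4e-3]
for lvl in range(1, len(errors)):
    print(f"level {lvl}: observed order = {observed_order(errors[lvl-1], errors[lvl]):.2f}")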
It is possible to inspect errors for individual cells of various cell types. For regular cell types (square,
equilateral triangle and right triangle), all the gradient operators are able to produce at least 1st order
accuracy. In fact, the square cell type stencil yields 2nd order accuracy for each method. The
curvilinear method produces a notably smaller error for this case. For irregular stencils, which are
of greater practical interest, we start observing the familiar result of convergence stalling for the
inconsistent schemes, namely the Green-Gauss method with simple or IDW face averaging.
Discarding the special case of the square stencil, the LSQR gradient operator consistently produces
the lowest errors except for the cases of thin triangles and thin quadrilaterals (commonly
encountered in boundary layer regions of CFD meshes). For the thin cells, the trend reverses and the
LSQR method yields the largest errors while the consistent Green-Gauss methods perform the best.
Note that the thin cells mentioned here were sampled near the curved boundary region of the test
mesh. Whereas the Green-Gauss method exhibited mediocre performance elsewhere, its favorable
behavior in the crucial boundary layer type meshes demonstrates its appeal.
The errors associated with the curvilinear method were erratic, yielding the best result for the square
cells and placing among the lower error range elsewhere, with two exceptions: the thin quadrilateral
and the arbitrary polyhedron, where it exhibited the largest errors. This suggests that a smarter logic
for stencil reduction (in 2D, down-selection of 4 stencil points) needs to be developed. Otherwise, we
consider this method promising, considering that it has the most compact stencil, hence the lowest
computational cost.
12.2.6 Concluding Remarks
A detailed accuracy study of gradient calculation methods for cell-centered unstructured data is
presented. Necessity of the linear-exactness property for 1st order gradient accuracy, and
consequently a 2nd order scheme, is emphasized. A straightforward, yet novel, approach utilizing
local curvilinear transformation is proposed. The curvilinear method offers the most compact
gradient stencil among those studied here. No clear “best” method emerged but strengths and
shortcomings of the investigated methodologies for different cell types are exposed. Gradient
operators with compact stencils, namely LSQR compact and curvilinear, generally exhibited lower
errors. The LSQR compact scheme caused stability issues for the solution of the inviscid standing vortex
problem on the random triangulated mesh. The curvilinear scheme, on the other hand, had erratic
behavior for different cell types, yielding overall low error levels but exhibiting a large error for a
sample arbitrary polyhedral cell. This suggests that the method could benefit from the development of a
smarter stencil reduction logic (to down-select 4 points from the available stencil in 2D). The Green-
Gauss method stood out with lower errors for thin triangular or quadrilateral cell types, such as those
found in typical boundary layer meshes, which is a very attractive quality for CFD solvers.
Figure 12.10 Broken Rules are Displayed On-Screen as a Guide to Repair – (Courtesy of Pointwise Inc.)
Figure 12.11 Notional Launch Vehicle Imported from an IGES file – (Courtesy of Pointwise Inc.)
The results are rapid generation of high density, boundary layer refined meshes as illustrated in
Figure 12.12.
Figure 12.12 T-Rex Mesh Generates Near-Wall Hex Layers for Boundary Layer Resolution and
Transitions to an Isotropic Tetrahedral Mesh in the Far Field – (Courtesy of Pointwise Inc.)
486 Steve Karman & Nick Wyman, “Automatic Unstructured Mesh Generation with Geometry Attribution”, AIAA
Sci tech 2019 Forum, Pointwise, Inc.
487 Michael Turner, “High-Order Mesh Generation For CFD Solvers”, Imperial College London, Faculty of
Engineering Department of Aeronautics, A thesis submitted for the degree of Doctor of Philosophy, 2017.
order meshing created around a circular geometry488. It was argued that the classification of
curvilinear meshing techniques has traditionally been separated into direct and indirect methods489.
The direct approach seeks to build a curvilinear high-order mesh directly from the CAD boundary
representation whereas the indirect approach seeks to elevate and untangle a linear mesh generated
using pre-existing meshing and smoothing technologies. In general, creating valid elements of
arbitrary order directly from CAD is very computationally intensive, so even the direct methods often
generate and elevate linear elements. Regardless of which approach is used, every curvilinear
meshing technique requires a way of identifying and correcting invalid elements. The current
curvilinear meshing techniques can be loosely categorized into two main groups based on the
method of correcting invalid elements: local mesh modification and energy models.
The current state of the art in high-order mesh generation does not provide the reliable and efficient
approach which would be required in an industrial setting. The aim here is to create high-order
meshes directly from CAD as automatically and robustly as possible. This means that all parts of the
high-order meshing problem must be addressed, including CAD handling and linear mesh generation.
It has been shown possible to take several high-order mesh generation methods found throughout
the literature and unify them in one context. In addition to this, the algorithms used within this
framework mitigate a significant amount of the high computational cost associated with high-order
mesh generation and attempt to address robustness issues.
In the case of inviscid meshes, where the wall spacing is large compared to the boundary
deformation, it is possible to curve the boundary without inverting the element or causing severe
degradation to element quality. However, for viscous meshes, where the wall spacing is much smaller
compared to the boundary deformation, simply deforming the boundary edge can result in the edge
inverting the boundary element and crossing over several layers of interior elements, as shown in
Figure 12.13-a. In this situation, curvature must be applied to at least the layers that were crossed,
but possibly to additional layers in order to increase element quality throughout the mesh. Figure
12.13-b shows the minimal amount of curving required to produce elements with positive areas,
however the third layer still contains nearly inverted elements. Propagating the curvature further
into the interior, as shown in Figure 12.13-c, produces much better quality elements throughout
the entire mesh. The challenge for high-order curvilinear meshing, therefore, is to adequately
represent increasingly complex geometries while also maintaining element validity and maximizing
488 Kristen Catherine Karman, “Higher Order Mesh Curving Using Geometry Curvature Extrapolation”, A
Dissertation Submitted to the Faculty of the University of Tennessee at Chattanooga in Partial Fulfillment of the
Requirements of the Degree of Doctor of Philosophy in Computational Engineering, Dec, 2017.
489 Blum, H., "A transformation for extracting new descriptors of shape", Models for the Perception of Speech and Visual Form.

p_n(x) = \sum_{i=0}^{n} L_i(x)\, y_i \; ; \qquad L_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}          Eq. 12.6
490 Kristen Catherine Karman, “Higher Order Mesh Curving Using Geometry Curvature Extrapolation”, A
Dissertation Submitted to the Faculty of the University of Tennessee at Chattanooga in Partial Fulfillment of the
Requirements of the Degree of Doctor of Philosophy in Computational Engineering, Dec, 2017.
where Li (x) has the property Li(xj ) = δij. Lagrange polynomials only interpolate the points themselves
and since the initial edge has no interior points, this produces a polynomial of at most degree 1, which
gives us back the linear edge, as shown in Figure 12.15-a.
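A minimal sketch of evaluating the Lagrange basis and interpolant of Eq. 12.6 is given below; with only the two end points available, it reproduces the straight edge, as noted above.

import numpy as np

def lagrange_basis(nodes, i, x):
    """L_i(x) = prod over j != i of (x - x_j) / (x_i - x_j)  (Eq. 12.6)."""
    Li = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            Li *= (x - xj) / (nodes[i] - xj)
    return Li

def lagrange_interpolate(nodes, values, x):
    """p_n(x) = sum over i of L_i(x) * y_i."""
    return sum(lagrange_basis(nodes, i, x) * yi for i, yi in enumerate(values))

# Two end points only (no interior points): the interpolant is the straight edge.
nodes, values = [0.0, 1.0], [0.0, 2.0]
print(lagrange_interpolate(nodes, values, 0.25))  # 0.5, i.e. the linear edge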
12.4.1.2 Hermite Interpolating Polynomials
Hermite polynomials interpolate both the coordinates and the derivatives at the control points.
Given an edge with two endpoints p0 and p1 and corresponding tangents t0 and t1, the Hermite
interpolation is defined by 491
p(u) = (2u^3 - 3u^2 + 1)\,p_0 + (-2u^3 + 3u^2)\,p_1 + (u^3 - 2u^2 + u)\,t_0 + (u^3 - u^2)\,t_1          Eq. 12.7
It is important to note that the general form of the Hermite interpolating polynomial does not require
the tangent vectors be normalized. Therefore it is possible to determine the appropriate lengths
which would also interpolate the Gaussian curvature at the end points, however this technique was
not pursued for this study. Figure 12.15-b shows the visualization of the resulting Hermite
interpolating curve for the example edge.
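A minimal sketch of the cubic Hermite edge interpolation of Eq. 12.7 is shown below, using the standard Hermite basis functions as reconstructed above; the example end points and tangents are illustrative only.

import numpy as np

def hermite_edge(p0, p1, t0, t1, u):
    """Cubic Hermite interpolation of an edge (Eq. 12.7): interpolates the two
    end points p0, p1 and the (not necessarily normalized) tangents t0, t1."""
    p0, p1, t0, t1 = map(np.asarray, (p0, p1, t0, t1))
    h00 = 2*u**3 - 3*u**2 + 1
    h01 = -2*u**3 + 3*u**2
    h10 = u**3 - 2*u**2 + u
    h11 = u**3 - u**2
    return h00*p0 + h01*p1 + h10*t0 + h11*t1

# Straight end points with non-parallel end tangents give a curved (bowed) edge.
for u in (0.0, 0.5, 1.0):
    print(u, hermite_edge([0, 0], [1, 0], [1, 1], [1, -1], u))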
12.4.1.3 Bézier Curves
Whereas the Lagrange and Hermite interpolating polynomials interpolate a given set of points and
conditions, an alternative approach is to define a curve that can be easily manipulated which
approximates a given set of points. Bézier started with the principle that any point on a curve
segment must be given by a parametric function of the form
491 Garanzha, V. A., and Kaporin, I. E., "Regularization of the variation method of grid generation", Computational

p(u) = \sum_{i=0}^{n} p_i\, f_i(u)          Eq. 12.8
where u is the parametric coordinate with the restriction u ∈ [0, 1] and pi represent the n + 1 vertices
of a characteristic polygon (also called control points). He also set forth the following properties for
the blending functions fi(u):
➢ The functions must interpolate the first and last vertex points.
➢ The tangent at p0 must be given by p1 - p0 and the tangent at pn must be given by pn - pn-1. This
allows for direct control of the tangent.
➢ The previous requirement was also generalized for higher orders, namely that the r-th
derivative at an endpoint must be determined by its r neighboring vertices. This allows for
control of the continuity at joints between segments of a composite Bézier curve.
➢ The functions fi(u) must be symmetric with respect to u and (1 - u). This allows for reversing
the sequence of vertex points without altering the shape of the curve.
Bézier chose a family of functions known as the Bernstein polynomials to fulfill these properties.
Figure 12.15-c shows the visualization of the four point Bézier curve as derived.
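A minimal sketch of evaluating a Bézier curve with Bernstein blending functions is given below; the four control points are illustrative, and the routine is a direct evaluation of Eq. 12.8 with Bernstein polynomials rather than the de Casteljau construction.

from math import comb
import numpy as np

def bernstein(n, i, u):
    """Bernstein polynomial B_{i,n}(u) = C(n,i) u^i (1-u)^(n-i)."""
    return comb(n, i) * u**i * (1 - u)**(n - i)

def bezier_point(control_points, u):
    """p(u) = sum over i of B_{i,n}(u) p_i for control points p_0..p_n (Eq. 12.8)."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    return sum(bernstein(n, i, u) * pts[i] for i in range(n + 1))

# Four-point (cubic) Bezier curve: it interpolates the end points, and its end
# tangents are aligned with p1 - p0 and p3 - p2, as the listed properties require.
ctrl = [(0, 0), (0.25, 0.5), (0.75, 0.5), (1, 0)]
for u in (0.0, 0.5, 1.0):
    print(u, bezier_point(ctrl, u))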
If the vertex locations of the linear surface mesh are taken to be fixed, producing a high-order surface
can be accomplished simply by using an affine mapping of the triangle in the 2D parameter plane to
the reference triangle of a high-order element. This can then be used to locate the new high-order
nodes in the parameter space, which are then projected into 3D using the CAD engine. However, this
means that the high-order triangles will inherit the distortion of the CAD surface, lowering the quality
of the mesh and in some cases causing invalid elements. The rest of this section presents a method to
take the high-order surface mesh made using the affine mapping approach and optimize the location
of the high-order nodes to reduce CAD induced distortion. This is done by modelling the mesh entities
as spring networks and minimizing the spring energy, in a similar approach to the work of 492. In
mathematical notation, this can be expressed as
\min f = \sum_{S} \frac{\| \mathbf{x}_1 - \mathbf{x}_2 \|^2}{w_S}          Eq. 12.9
492 Sherwin, S. J., and Peiró, J., "Mesh generation in curvilinear domains using high-order elements", International
Journal for Numerical Methods in Engineering 53, 1 (2002).
which states that f, the spring energy, is the sum over all the springs in the system, where x1 and x2
are the 3D locations of the nodes at the ends of the springs and ws is the inverse of the spring stiffness,
which is calculated as a function of the nodal distribution being targeted. Because the linear mesh
vertices are held fixed during this procedure, the problem can be reduced to an entity-by-entity
approach.
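A minimal sketch of this spring-energy functional is shown below, assuming the springs are given as node index pairs with per-spring weights w_S; a real implementation would assemble the springs from the mesh entities as described in the following paragraphs.

import numpy as np

def spring_energy(node_coords, springs, weights):
    """f = sum over springs S of ||x1 - x2||^2 / w_S  (Eq. 12.9).

    node_coords : (N, 3) 3D positions of the high-order nodes
    springs     : list of (i, j) index pairs, one per spring
    weights     : (Nsprings,) inverse spring stiffnesses w_S
    """
    f = 0.0
    for (i, j), w in zip(springs, weights):
        f += np.sum((node_coords[i] - node_coords[j])**2) / w
    return f

# Three collinear nodes joined by two equal springs: moving the middle node off
# the midpoint raises the energy, which is exactly what the optimization removes.
coords = np.array([[0.0, 0, 0], [0.6, 0, 0], [1.0, 0, 0]])
print(spring_energy(coords, [(0, 1), (1, 2)], np.array([0.5, 0.5])))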
The optimization proceeds by first optimizing mesh edges that lie on curves; then edges that lie on
surfaces; and finally the interiors of triangular faces that lie on CAD surfaces. In the first case (edges on
CAD curves), the problem is a 1D optimization of the spring system in the curve's parameter space t:
f = \sum_{i=1}^{P} \frac{\| \mathbf{x}(t_{i+1}) - \mathbf{x}(t_i) \|^2}{w_i}          Eq. 12.10
where i is one of the P + 1 nodes along the high-order edge. Here, P is the polynomial order of the
mesh being created and w_i = z_{i+1} - z_i, where z_i is the i-th entry in the distribution of nodal points
on the standard segment -1 ≤ z ≤ 1. The initial values of t are obtained from the linear 1D mapping

t_i = t_1\left(\frac{1 - z_i}{2}\right) + t_{P+1}\left(\frac{1 + z_i}{2}\right), \qquad i = 1, \ldots, P+1          Eq. 12.11
where t1 and tP+1 are the parametric coordinates of the end nodes of the edge, which are the vertices
in the linear mesh and are considered to be fixed. Performing the optimization of the edges which lie
on the CAD surfaces follows Eq. 12.9. High-order surface generation493 follows a similar procedure but is
formulated in the 2D parameter plane, i.e.,

f = \sum_{i=1}^{P} \frac{\| \mathbf{x}(u_{i+1}, v_{i+1}) - \mathbf{x}(u_i, v_i) \|^2}{w_i}          Eq. 12.12
This procedure reduces the distortion found in the high-order edges by minimizing the length of the
edge; that is, the optimized high-order edge will lie approximately on the geodesic between the two
end points on the surface. The procedure for optimizing the location of face interior nodes requires
a slightly different approach. The system is considered as a set of freely movable nodes, consisting
of those nodes lying on the interior of the triangle, and a set of fixed nodes which lie on the edges.
Each of the free nodes is connected to a system of six surrounding nodes by springs, and this is the
system which is minimized. In a triangle of order P, there are (P-2)(P-1)/2 interior nodes. The
function f is
f = \sum_{i=1}^{(P-2)(P-1)/2} \sum_{s=1}^{6} \frac{\| \mathbf{x}(u_i, v_i) - \mathbf{x}(u_s, v_s) \|^2}{w_s}          Eq. 12.13
where ws is calculated as the distance between the two nodes in a reference equilateral triangle, shown
in Figure 12.16 along with the connectivity of the springs. The choice of a six spring system means that
the method is applicable to any point distribution at any order. For example, Figure 12.16 shows a
P = 4 triangle with a Gauss-Lobatto-Legendre distribution along the edges and a triangular Fekete
distribution for the face interior points.
To optimize the energy of the system a bounded version of the BFGS algorithm is used494. This
bounding is necessary due to the limits of the parameter space in the CAD entities. Figure 12.17 shows
the effectiveness of this optimization procedure. The left-hand figure shows the surface mesh before
optimization, and the right-hand figure after optimization of the spring networks. In this case, the
highly distorted CAD surface of the rounded leading edge of a wingtip causes suboptimal surface
mesh generation.
Figure 12.16 Distribution of Points in a Fourth-Order Triangle and the Six Spring System Linking the
Free Nodes - Fekete Distribution
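Before looking at the optimization results in Figure 12.17, the sketch below indicates how the bounded optimization of the face-interior spring system of Eq. 12.13 might look using SciPy's L-BFGS-B routine, with bounds standing in for the limits of the CAD parameter space. The surface mapping, spring connectivity and weights here are toy placeholders, not NekMesh's implementation.

import numpy as np
from scipy.optimize import minimize

def surface(u, v):
    """Toy surface x(u, v); a real implementation would query the CAD engine."""
    return np.array([u, v, 0.1 * np.sin(3 * u) * np.sin(3 * v)])

def face_spring_energy(uv_free, uv_fixed, springs, weights):
    """Eq. 12.13 style energy: free interior nodes plus fixed edge nodes, with
    spring lengths measured in 3D after mapping through the surface."""
    uv = np.vstack([uv_free.reshape(-1, 2), uv_fixed])
    xyz = np.array([surface(u, v) for u, v in uv])
    return sum(np.sum((xyz[i] - xyz[j])**2) / w for (i, j), w in zip(springs, weights))

# One free interior node (index 0) connected by springs to three fixed edge nodes.
uv_fixed = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
springs = [(0, 1), (0, 2), (0, 3)]
weights = [1.0, 1.0, 1.0]
res = minimize(face_spring_energy, x0=np.array([0.2, 0.2]),
               args=(uv_fixed, springs, weights),
               method="L-BFGS-B", bounds=[(0.0, 1.0), (0.0, 1.0)])
print("optimized (u, v):", res.x)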
The figure clearly shows that the high-order triangles are deformed under the linear mapping.
However, when this optimization procedure is performed, the mesh edges approximate geodesic
lines better and the resulting surface mesh is smoother. There are a number of advantages in using
this simple approach to high-order surface mesh optimization. In addition to being simple to
implement, with analytical gradients of the functional readily obtained, the primary advantage is that
it is relatively cheap. This is because the bottom-up, disconnected nature of the method means that
each individual optimization problem is small. However, because of the disconnected approach it
cannot always achieve optimal meshes. It can, however, alleviate invalid and low quality elements
induced by the curved surface mesh.
Figure 12.17 The High-Order Surface Created Using the Affine Mapping with / without Optimization in
a Region of High Distortion in the CAD Surface (left to right)
494 Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C., "A limited memory algorithm for bound constrained
optimization", SIAM Journal on Scientific Computing 16, 5 (1995).
12.4.3 High-Order CFD Methods in Industrial Applications
The goal of transferring high-order CFD methods from academic to industrial, production impacting,
applications is as yet unrealized. Using high-order CFD as a production tool for industry has a number
of hurdles to overcome. This chapter will focus on the last of these hurdles, with a view to achieving
practical results for industrial applications. The robustness found in commercial linear mesh
generators is due to a number of factors. Primarily, the system will have a series of failsafe options
which allow the mesh generator to recover and continue in the case of a critical error. These failsafes
are developed over time by looking at a case, seeing what works and what does not, and developing a
solution for the problems at hand before moving on to the next test case. This philosophy has allowed
a number of commercial mesh generators to achieve significant levels of robustness over a range of
very complex cases. This kind of methodology has, as yet, not been applied to high-order mesh
generators.
Each example in the literature aims to achieve complete geometric accuracy without compromise.
This chapter explores the idea of relaxing stringent criteria on the high-order mesh, with the goal of
producing meshes on complex geometries which would otherwise be impossible. The study focuses
on the idea of obtaining high-order CFD results on complex geometries with the goal of achieving
practical outcomes; that is, for most aerodynamic external flows, studying the lift, drag and vortex
behavior of the flow. The goal was to produce, by any means, meshes that obtain results without
compromising the outcomes. To achieve this, when considering an a posteriori approach to
high-order meshing, it is vital to think beyond the limitations of the linear mesh generator.
It would, on the face of it, seem possible and even trivial to take these already robust tools and simply
extend them to achieve high-order meshes. In theory, all the positive properties of the linear meshes,
such as the robustness and CAD capability, would be inherited, but this is far from the case. This
chapter focuses on the production of meshes for three geometries which represent the design
progress of a high aerodynamic performance road car.
12.4.3.1 Methodologies
One of the most significant factors contributing to the robustness of commercial linear mesh
generators is that prior to making the mesh the CAD surface will be linearized. The surface is
triangulated with no consideration for quality but simply CAD accuracy. The surface triangulation is
usually produced by repeatedly subdividing the surface until the deviation from the true surface of
the edges of the triangles is less than some tolerance. The final mesh is then built upon this linearized
CAD representation. The primary advantage is that any poor quality CAD features can be paved over,
removed or altered easily within the triangulation. The disadvantage is the reduced CAD accuracy of
the resulting mesh. This can be offset by increasing the resolution of the linearized CAD surface.
However for finite volume CFD methods, where these meshes are used the most, the loss in CAD
accuracy does not have a significant impact on the final flow result. Most critically, when considering
high-order meshing, this means that the surface mesh vertices cannot be located in the parameter
spaces of the surfaces without using some form of reconstruction of this information, which can
introduce errors and robustness issues. This makes the idea of curving the surface elements from a
generically made linear mesh very challenging.
Two strategies have been developed which provide the relatively simple creation of high-order
meshes for extremely complex cases. The first is based on being able to know the parametric
information associated with the linear surface mesh, hence high-order curving of the surface is a
relatively easy task and shall be referred to as analytic curving. The second is based on being able to
reconstruct the CAD information or project the linear mesh onto the CAD surfaces. This approach has
significant issues with speed and robustness but offers an alternative method of curving the surface,
which will be referred to as projection curving.
12.4.3.2 Analytic Curving
The process of curving the geometric surface of a high-order mesh has been considered as a bottom-
up process. That is to say that when curving the surface mesh entities each surface mesh vertex is
aware of the CAD object, whether that be a curve or a surface, it belongs to as well as its associated
parametric coordinates. Armed with this information, obtaining an initial curving of the surface is
quite simple.
The first version of the pipeline to combine robust linear meshing with high-order tools was designed
to preserve the simplicity of analytic curving and the robustness of the linear volume generation. To
achieve this, the surface mesh was produced using the linear meshing tools within NekMesh and then
exporting this surface mesh to Star-CCM+ for the generation of the volume mesh. All parametric
information was preserved and therefore curving the surface was a simple task. The commercial
linear mesher was then used to build the near-wall macro prism layer and to mesh the interior of the
domain with tetrahedra.
Robustness of linear boundary layer generation in this approach is obtained without compromising
the simplicity and robustness of the high-order surface generation. However it was found that the
use of this approach, which was applied to one of the examples shown later, resulted in dozens of
cycles of running NekMesh, Star-CCM+ meshing, and altering the CAD to obtain a mesh.
A second approach overcame a number of these shortcomings by allowing the linear mesh generator
to create the surface mesh itself. This gave a significant increase in quality and ease of creating the
meshes. In this process NekMesh generated the linearized CAD surface, using its own CAD engine,
and then exported this to Star-CCM+. The linear surface mesh generator would then use this as its
base for generating the mesh. The data structures used allowed for robustly and cheaply obtaining
the parametric information of the surface mesh vertices. This is because the linearized CAD
triangulation is divided into sections of the CAD surfaces which it came from. It is possible to get the
linear surface mesh in Star-CCM+ to obey the boundaries of the CAD. That is, it will not generate a
triangle whose area spans two different CAD surfaces. This information made it possible to identify
which CAD surface each surface mesh vertex came from.
Obtaining the parametric coordinates was more difficult. The parametric coordinates of the
linearized CAD triangulation are stored, therefore for a given surface mesh vertex an approximate
parametric location is known by taking the value from the nearest CAD triangulation point. Using the
CAD engine, the actual location can be obtained through reverse projection, a process which is
nonlinear, usually slow and can fail, but which can be made significantly faster and more robust by having
an approximate location. The surface mesh edges which exist on CAD curves can then be inferred by
looking through the data structures and seeing which edges are connected to two triangles which are
on two different CAD surfaces. Once all parametric coordinates are obtained, the high-order meshing
process can continue as before.
This analytic curving approach was found to be significantly easier to use as it only required the
running of Star-CCM+ once and NekMesh twice, first to produce the CAD linearization and then to
reimport the Star-CCM+ mesh and curve it. However, it still had one critical flaw: it required the linear
surface mesh to explicitly obey the boundaries of the CAD surfaces. This is not the worst requirement,
but when moving to more complex CAD models, those with in excess of thousands of surfaces, it
proved to be a significant limitation. This was due to requiring an impractical amount of effort in
preparing and cleaning the CAD for the size of problems being tackled. The CAD healing was required
because, while the user can tell Star-CCM+ to obey the surface patches, beyond a certain limit of mesh
quality it will no longer do so. The surface triangles could not be curved in this scenario.
12.4.3.3 Projection Curving
The final high-order meshing approach took proactive measures to ensure the meshing pipeline
would be as robust and easy to use as possible. In this case, all CAD information is reconstructed after
the generation of the linear mesh. This places no criteria, or additional steps on the linear meshing
stage. As stated this approach can suffer from a lack of robustness and computational cost. However,
here a number of steps are taken to ensure the method is viable for even the largest CAD models. The
method begins by importing a linear mesh into NekMesh that is a reasonably close representation of
the underlying CAD model. The CAD model is then processed in two ways. Firstly, each surface in the
model is linearized (triangulated). The triangulation for each surface is then stored alongside the
curvilinear CAD model so that two CAD representations exist. Secondly the model is processed into
a tree structure. A bounding box is determined for each CAD surface; each box is slightly enlarged
in each direction by 5%.
The surface mesh nodes are then processed to obtain their CAD parameterization information. For
each node, the process begins by obtaining a list of potential parent surfaces by querying the
bounding box. If a node is within a box, the surface is added to the list. Because of the use of the tree,
this process is itself quick and serves to reduce the potential number of surface operations down
from the whole model to just a few candidates, typically 2 or 3. The node is projected onto the surface
for each of the candidate surfaces. This is done by firstly finding the nearest vertex in the surface
triangulation and then using this as an initial guess for the non-linear problem of surface projection.
The parent surface to the node is then identified as the closest surface. A few additional
considerations must be made.
Because of the linearization of the CAD prior to linear meshing, it is quite possible that the surface
node may not actually lie on the CAD surface. Therefore, once the node's surface has been identified,
it is then moved to the surface. It is important to remember that moving the surface node like this
may induce inverted elements. This must be tested for: if moving the surface node creates inverted
elements, the node is placed into an exclusion set and left in its original location. Likewise, it is possible
that the node may need to be moved a significant distance to the surface; while this move may not
induce invalid elements, it may significantly lower the quality.
Therefore a node is not moved if the required displacement is greater than 10% of the length of the
edges in the local mesh and it will also be placed in the exclusion set. In other words, the node will
not move a significant distance with respect to the size of the elements in the local region. The
generation of the curved surface then proceeds as in the analytic approach, bar a few exceptions.
Firstly, any mesh entity, face or edge, which has a mesh node in the exclusion set will not be curved
and is left linear. This is because either the CAD information is not available or it would be unreliable.
Secondly, any curving that induces an invalid element is reverted and the element remains linear.
Lastly, if a mesh entity overlaps two or more CAD surfaces, i.e. if for an edge each of the vertices has a
different CAD parent, the system will use a projection based approach to curve the entity.
High-order nodes will be placed along the linear mesh entity producing a high-order but straight-
sided entity. To curve the mesh entity the nodes are then projected onto the CAD surface and moved
to the surface.
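A minimal sketch of the candidate-surface lookup and node-movement logic described above is given below, with bounding boxes enlarged by 5% and the 10% displacement cap; the projection call is a placeholder for the CAD engine's reverse projection, and a flat dictionary of boxes stands in for the actual tree structure.

import numpy as np

def enlarged_bbox(points, factor=0.05):
    """Axis-aligned bounding box of a surface, enlarged by 'factor' per direction."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    pad = factor * (hi - lo)
    return lo - pad, hi + pad

def candidate_surfaces(node, bboxes):
    """Surfaces whose (enlarged) bounding box contains the node."""
    return [sid for sid, (lo, hi) in bboxes.items()
            if np.all(node >= lo) and np.all(node <= hi)]

def assign_and_move(node, bboxes, project, local_edge_len, exclusion):
    """Pick the closest candidate surface; move the node onto it unless the required
    displacement exceeds 10% of the local edge length (then exclude the node)."""
    best = None
    for sid in candidate_surfaces(node, bboxes):
        foot = project(sid, node)           # placeholder: CAD reverse projection
        dist = np.linalg.norm(foot - node)
        if best is None or dist < best[1]:
            best = (sid, dist, foot)
    if best is None or best[1] > 0.1 * local_edge_len:
        exclusion.add(tuple(node))           # leave the node where it is
        return node, None
    return best[2], best[0]

# Example: one spherical surface sampled coarsely, and a node slightly off it.
theta = np.linspace(0, np.pi, 10)
phi = np.linspace(0, 2 * np.pi, 20)
T, P = np.meshgrid(theta, phi)
pts = np.column_stack([(np.sin(T) * np.cos(P)).ravel(),
                       (np.sin(T) * np.sin(P)).ravel(),
                       np.cos(T).ravel()])
bboxes = {0: enlarged_bbox(pts)}
project = lambda sid, x: x / np.linalg.norm(x)   # projection onto the unit sphere
new_pos, parent = assign_and_move(np.array([0.0, 0.0, 1.05]), bboxes, project, 0.5, set())
print(parent, new_pos)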
A small but not insignificant proportion of the surface mesh entities will be straight-sided as opposed
to curved. This could have a significant impact on the geometric accuracy of the mesh and, in turn, on the
solution. However, in the results below this will be shown to have little noticeable impact. A full
investigation into this has not been conducted due to time and resource constraints, but this work
demonstrates that high-order meshes can be produced on industrial models and sensible results obtained
with little meshing effort. Indeed, the final projection meshing pipeline required only one execution of the
linear mesh generation and NekMesh, with no need for repetition with either system or the CAD.
The simulations were performed using the incompressible Navier-Stokes solver in Nektar++. This
solver uses an implicit large-eddy simulation (iLES) formulation. Each of the simulations was run at
a Reynolds number of 250,000, which is lower than the experimental value of approximately 2
million but still higher than other simulations of this type which have been attempted. The
simulations were run at polynomial order 5 and, to increase the accuracy of the integration, the
quadrature order used was 7. This meant that the meshes had to be generated at P = 7 to ensure the
elements were valid with this quadrature. Numerical stability was a big factor in these simulations
and is one of the reasons the Reynolds number is lower than experimental values. Indeed, new
stabilization techniques were developed which are an extension of this work. Meshes in this study
ranged from approximately 2-3 M elements as the geometries became more complex and were a
combination of triangular prisms in the near-wall region and tetrahedra in the rest of the domain.
The study began with design 1 (D1): the baseline model of the car currently in production. This model
was meshed using the first analytic meshing pipeline, a process which in total took approximately 4
months of going back and forth between CAD healing and linear and high-order meshing to produce
a viable mesh. The process succeeded but was deemed too complex for robust high-order meshing.
This mesh contained 2.2 M elements of which, approximately 0.6 M were prisms within a boundary
layer mesh. The simulation was conducted under the conditions detailed previously. Because of the
aggressive linearization method used to ensure the ability to produce the mesh, its quality, by the
scaled Jacobian measure, was guaranteed to have a minimum value greater than 0.1. This was the
case for all three meshes presented in this section. The study moved onto work with a second
geometry (D2) which was designed based on the results of the D1 simulation. The D2 mesh was
created using the second analytic curving method. This mesh contained 2.6 M elements with
approximately 0.7 M boundary layer prisms. Figure 12.19 shows the meshes of the two
designs, D1 (left) and D2 (right), and the corresponding flow solutions are shown in Figure 12.20. In
the case of these images, the mesh is indeed curved but, for clarity, only the curved edges of the
elements are shown. For further information, please consult [Turner]495.
From the high-order D1 simulation two key findings were made, neither of which was identified in
low-order RANS simulations. Firstly, there appeared to be strong vortical structures hitting the
driver's helmet. This was fixed in D2 with a redesigned console area in front of the driver. The D2
contour shows clearly that these structures now pass cleanly over the driver's head and are
somewhat less noisy in terms of the substructures. Secondly, the roll hoop produced significantly
more drag and separation than predicted. D2 had a redesigned roll hoop with an airfoil profile as
495Michael Turner, “High-Order Mesh Generation For CFD Solvers”, Imperial College London, Faculty of
Engineering Department of Aeronautics, A thesis submitted for the degree of Doctor of Philosophy, 2017.
opposed to a circular cylinder. This airfoil profile was slightly angled to help control the flow over the
new Gurney flap at the rear of the car. This, combined with redesigned diffusers on the underside, led
to increased downforce over D1. The trend of changes in downforce and drag between the D1 and D2
simulations was well predicted by the high-order simulations and matched the trend in the RANS
results.
Between the two simulations, D1 and D2, despite no geometry changes on the forward part of the
underside of the car, the front splitter showed significant variation in the flow physics. The D1
simulation shows significantly lower pressure in this region and separation of the flow. In contrast,
D2 shows much smoother, attached flow. As there is little variation in the geometry in these two
regions between the two CAD models the most likely explanation of the greatly differing results is
mesh error. To investigate this, Figure 12.21 shows the mesh in these regions. Recalling that these
meshes were made using the two different analytic curving approaches, it is observed that the
mesh is much smoother in gradation for D2. A possible explanation of the non-physical separation is
that the lower mesh quality of D1 induced this error. This result demonstrates the sensitivity of high-
order simulations to mesh quality, as well as the higher quality obtained using the mesh from the second
analytic curving method, where the commercial linear mesh generator had some degree of control
on the surface mesh.
Figure 12.21 Underside of the RP1 car surface mesh, design 1 (D1) left, design 2 (D2) right
The study concludes with one final design (D3), which is a full aerodynamic upgrade over the previous
two designs. This car is designed to achieve extremely high levels of downforce specifically for track
racing. This design includes a fully redesigned floor, front splitter and the addition of a rear wing. The
ride height has also been altered, raising the car at the rear for increased diffuser performance and
lowering the front of the car to increase the ground effect of the front splitter. The increased
geometrical detail led to an increase in mesh size for this geometry to 3.1 M elements, of which just
under a third were boundary layer prisms. In the following images there is a noticeable offset in the
geometries of the two cars; this is because of the alteration in ride height.
The D3 mesh, which was also the largest at approximately 4 M elements, was made using the
projection method. This method can be aggressive in leaving regions of the surface mesh straight-sided
where the curving process either would not work or would produce invalid elements. This meant
that a small percentage of the surface was not geometrically accurate. However, as the D3 results
show, because they are consistent with D2 and show no obviously non-physical flow region, this
aggressive approach had little impact. Indeed, it allowed the mesh to be created almost effortlessly:
it required only one execution each of the linear meshing and NekMesh, and no CAD healing
or repetitive cycles. However, a more conclusive study with more simulations is required to draw
stronger conclusions on whether the compromised geometric accuracy is a trade-off worth
making. The early results here show that it may well be.
12.4.4 Application of Optimization Framework
This section outlines the application of the optimization framework to the generation of triangular,
quadrilateral, tetrahedral and prismatic meshes and combinations thereof. This section begins with
a brief discussion of how to compare the relative qualities of each mesh. We follow closely the
development in [Turner]496.
12.4.4.1 Quality Metric
The current state of quality metrics for high-order mesh analysis is a confusing one. However, a
metric with which to comment on the meshes in this work must be selected. The clear choice, simply
because it is the most widely used despite its flaws, is the scaled Jacobian. However, within the context
of this work, this measure has a key drawback: this work studies a posteriori mesh generation and
therefore looks primarily at the deformation mapping Φ, whereas the scaled Jacobian analyzes the
mapping Φ_M. It would therefore be more logical to use the scaled Jacobian of the mapping Φ. The
element quality Q^e that is used to analyze meshes in this work is therefore defined to be
Q^e = \frac{\min_{\xi}\left[\det\left(\nabla\phi_M(\xi)\right)\, \det\left(\nabla\phi_I^{-1}(\xi)\right)\right]}{\max_{\xi}\left[\det\left(\nabla\phi_M(\xi)\right)\, \det\left(\nabla\phi_I^{-1}(\xi)\right)\right]}, \qquad \forall\, \Omega^e \subset \Omega          Eq. 12.14
where I denotes the ideal element. It is possible to further define the overall quality of the mesh by
considering the minimum metric over the mesh, defined as

Q = \min_{1 \le e \le N_{el}} Q^e          Eq. 12.15
These quality metrics lie in the range (-∞, 1] and, from a physical viewpoint, make the assumption
that an 'ideal' element should be as close to straight-sided as possible. Results near Q^e = 1 are
considered to be of the highest quality, as this suggests smoothness of the Jacobian, and any element
with Q^e < 0 is an invalid element. The key difference between the scaled Jacobian J^e_s and Q^e is that Q^e provides a
496Michael Turner, “High-Order Mesh Generation For CFD Solvers”, Imperial College London, Faculty of
Engineering Department of Aeronautics, A thesis submitted for the degree of Doctor of Philosophy, 2017.
measurement of the deformation between the straight-sided and curvilinear element. This makes no
difference in the case of triangular and tetrahedral elements (aside from a multiplicative constant)
since Φ_I is a linear mapping. However, in other elements possessing quadrilateral faces, it is possible
to have deformation even in a straight-sided or planar element due to Φ_I being a quadratic mapping.
This new quality metric is therefore invariant to element type, allowing the fair assessment of the
quality of hybrid meshes.
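A minimal sketch of evaluating these quality measures from sampled Jacobian data is given below; the sampled values stand in for det(∇φ_M) det(∇φ_I⁻¹) evaluated at the quadrature points of each element, which in practice would come from the mesh generator.

import numpy as np

def element_quality(jacobian_samples):
    """Q^e = min over xi of [det grad(phi_M) det grad(phi_I)^-1] divided by the
    max over xi of the same quantity (Eq. 12.14). Negative samples mark inversion."""
    j = np.asarray(jacobian_samples, dtype=float)
    return j.min() / j.max()

def mesh_quality(per_element_samples):
    """Q = min over all elements of Q^e  (Eq. 12.15)."""
    return min(element_quality(s) for s in per_element_samples)

# A straight-sided simplex has a constant Jacobian (Q^e = 1); a strongly curved
# element shows a spread of values; a negative sample indicates an invalid element.
samples = [[2.0, 2.0, 2.0], [1.5, 0.9, 1.2], [0.8, -0.1, 1.0]]
print([round(element_quality(s), 2) for s in samples])   # [1.0, 0.6, -0.1]
print(round(mesh_quality(samples), 2))                   # -0.1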
12.4.4.2 Case Study 1 - Simple 2D Demonstration Case
An initial mesh of nine triangles, two of which are invalid, is untangled to produce the valid meshes
shown in Figure 12.22-b and Figure 12.22-c, to show the capability of the framework to correct
invalid elements. This demonstrates how domain interior deformation improves the quality of a
curvilinear mesh and the ability to correct invalid elements. In this case, the mesh is initially invalid
with Q = -0.24. The mesh is further improved using the hyper elastic and distortion functionals.
In each of the figures, the quality distribution Q^e of each element is shown. The results demonstrate that
the elasticity functionals produce a higher quality final mesh.
tetrahedral meshes is simply the lack of suitable integration rules above 10th order. The quadrilateral
mesh example shows that the method works for very high-order meshes.
Figure 12.23 Optimization of a 10th Order Quadrilateral Mesh Showing the Initial Configuration and
Optimization Using the Hyper Elastic and Distortion Functional
Figure 12.25 Optimization of 4th Order Sphere mesh from the Initial Configuration
using the Hyper Elastic and Distortion Functional
Figure 12.24 Shows the Displacement Residual and Quality, Q, of the Cube Sphere Mesh
The resulting distribution is seen in the bottom right figure, where a noticeable shift towards Q^e = 1
can be observed. A few very poor quality tetrahedra remain, which can be seen in the bottom left of
the figure. These are due solely to the initial linear mesh, which in this region contains a number of
flat elements, which in turn limits the capability of the framework. This highlights the need for further
improvement in linear mesh generation for high-order generation. An interesting additional point to
note is that the hyper elastic functional was the only one able to untangle the mesh from the initial
invalid configuration. We posit that further investigation into the integration order is required to
497 Brodersen, O., and Stuermer, A. Drag prediction of engine-airframe interference effects using unstructured
Navier-Stokes calculations. 19th AIAA Applied Aerodynamics Conference (Anaheim, 2001), no. 2001-2414.
498 Untangling and optimization of the DLR F6 geometry. The left figures show the mesh before and after
optimization. On the right the distribution of the quality metric Qe is shown before and after optimization with
the hyper elastic functional.
further understand this phenomenon. In theory, each of the functionals has the necessary properties
to achieve a valid mesh; therefore, the failure of the other three is most likely due to the accuracy of
the numerical method. It is also possible that the regularization approach is not suitable for all of
the functionals and a bespoke regularization method is required for each.
Figure 12.27 Cross Section of a Semi-Sphere Case Highlighting the Sliding of CAD
Curves Along the Surface
499 Cross section of a semi-sphere case highlighting the sliding of CAD curves along the surface. The left-hand
image shows the initial mesh and the right-hand figure shows the optimized mesh. Note that the color of the
surface triangles is not related to mesh quality.
500 Spalart, P., and Mejia, K. Analysis of experimental and numerical studies of the rudimentary landing gear.
Proceedings of the 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace
Exposition (Orlando (FL), USA, 4-7 January 2011).
Figure 12.28 Hybrid prismatic-tetrahedral mesh of the Boeing reduced landing gear configuration
before (a) and after (b) optimization, and after the isoparametric splitting is applied (c). Note that the
color of the surface triangles is not related to mesh quality
501 Moxey, D., Green, M., Sherwin, S. J., and Peiró, J., "An isoparametric approach to high-order curvilinear
boundary-layer meshing", Computer Methods in Applied Mechanics and Engineering 283 (2015).
Figure 12.28 shows the 'macro' mesh before and after optimization, for which the hyper elastic
functional has been used since this has been shown to produce the highest quality meshes. The final
mesh created after the macro layer has been split is also shown. For the purposes of clarity, the
tetrahedra have been removed. Overall, the figure shows that, whilst the initial configuration before
optimization is of a reasonable quality, there are a number of lower-quality elements on the
shoulders of the tires. The quality in this area, as well as throughout the mesh generally, is then
improved by the optimization across all of the elements shown. The figure also shows the quality of the
prismatic layers after splitting, where in general it can be seen that this approach produces a high
quality mesh.
To quantify the increase in element quality, a number of element quality histograms for this case are
shown in Figure 12.29. Firstly, the overall distribution from the initial configuration seen in
Figure 12.29-a improves substantially under optimization, as shown in Figure 12.29-b, where a
clear shift to the right of the graph is observed, i.e., towards an improved mesh quality. However, it
should be noted that this optimization was conducted with a material constant of ν = 0.45, which
means that the elastic solid which is being relaxed is very stiff. Figure 12.29-c shows that reducing ν
to 0.4 leads to a mesh that, whilst being improved over the initial configuration, is overall of a lower
quality compared to ν = 0.45. This observation aligns well with the results reported in reference502,
where meshes
Figure 12.29 Element Quality Histograms of the Boeing Reduced Landing Gear Configuration for
Initial and various Optimization Settings
502 Poya, R., Sevilla, R., and Gil, A. J. A unified approach for a posteriori high-order curved mesh generation using
solid mechanics. Computational Mechanics 58, 3 (2016).
generated using values of ν close to the incompressibility limit led to higher quality elements.
Curiously, both the distortion and Winslow functionals lead to a decreased quality of the mesh, as
shown in Figure 12.29-d for the distortion functional.
It should also be noted that the use of the isoparametric splitting of the macro layer was a necessity
when generating high-order anisotropic boundary layers. Generally, it was found that, when using
the variational framework on anisotropic elements, the optimization algorithm would very regularly
fail to find a new minimum.
It is logical that, because of the shape of the prismatic element, the sensitivity of the functional to
nodal location is very strong, meaning that very large gradients are seen in the optimization. As
shown before, when the gradient is large, high degree quadrature is required and the Q = P+6 rule is
not sufficient. While an adaptive or very high-order integration would allow for the optimization of
anisotropic elements, it is simply not required when using the isoparametric splitting, meaning that
this approach is more computationally viable. While other examples of a posteriori high-order mesh
generation have shown the ability to correct anisotropic elements, they regularly report the need
for greatly increased iteration counts in the solution of either their PDEs or optimization processes. It
is proposed that this is a similar phenomenon to that experienced with the failed optimizations here.
12.4.4.8 Case Study 6 - Example of application: NACA Wing
This section describes an example of a 3D geometry for which high order meshes have been created
using the NekMesh system. Of particular interest to the high-order community is the test case of a
high angle of attack symmetric NACA wing with a rounded wingtip503. In this high Reynolds number
case (Re = 4.6 × 10⁶), as shown in numerous numerical examples, it is challenging to accurately
predict the position of the wingtip vortex when compared to experimental data. This is due to the
strong vortical structures and complex boundary layer physics. To demonstrate the validity of the
503Lombard, J.-E. W., Moxey, D., Sherwin, S. J., Hoessler, J. F. A., Dhandapani, S., and Taylor, M. J. Implicit large-
eddy simulation of a wingtip vortex. AIAA Journal (2015).
mesh produced by NekMesh, an incompressible flow simulation at Re = 10⁵ was performed using
Nektar++.
Due to the convex nature of the geometry, the addition of a prismatic layer adjacent to the geometry
meant that the high-order mesh was valid without needing to resort to the variational module to correct any elements. Each mesh of this geometry was created from the CAD definition using only four user parameters: δmin, δmax, ε and P, where P is the desired order of the mesh. This means that the meshes produced by NekMesh were significantly easier to generate than by other means. Figure 12.30 shows an image of the surface of the NACA wing geometry. This mesh has an anisotropic boundary layer, as shown in
Figure 12.31. Regions of high curvature such as the leading edge of the wing and the curved wing
tip, have increased resolutions compared to other parts of the mesh, as would be expected with the
automated specification system. On the suction surface the resolution of the mesh has been manually
increased to capture a separation region in the flow solution. Despite this modification, the octree
system has ensured that the mesh remained smooth, without large changes in element volume.
Further information is available in [Turner]504.
504Michael Turner, “High-Order Mesh Generation For CFD Solvers”, Imperial College London, Faculty of
Engineering Department of Aeronautics, A thesis submitted for the degree of Doctor of Philosophy, 2017.
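The automated, curvature-driven specification mentioned above can be pictured with a generic sagitta-based rule: given a chordal-deviation tolerance ε and a local radius of curvature R, a target edge length is computed and clamped to the user bounds δmin and δmax. The sketch below illustrates this idea only; it is not the exact spacing function implemented in NekMesh.

```python
import math

def spacing_from_curvature(R, eps, delta_min, delta_max):
    """Target edge length near a surface point with local radius of curvature R.

    A chord of length L on a circle of radius R deviates from it by the sagitta
    s = R - sqrt(R**2 - (L/2)**2); requiring s <= eps gives L = 2*sqrt(eps*(2R - eps)).
    The result is clamped to the user-specified bounds [delta_min, delta_max].
    """
    if math.isinf(R):                          # flat region: curvature imposes no limit
        return delta_max
    L = 2.0 * math.sqrt(eps * (2.0 * R - eps))
    return min(delta_max, max(delta_min, L))

# Assumed values: a tight leading edge, a gentler region, and a flat patch.
for R in (0.005, 0.05, float("inf")):
    print(f"R = {R}: delta = {spacing_from_curvature(R, 1e-4, 1e-3, 5e-2):.4g}")
```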
Figure 13.1 Comparison of Hex (16 K Cells) and Tet (440 K Cells) for a Pipe with 90 Degree Bend
505 Fluent, “Meshing and CFD Accuracy”, CFD Summit, June 2005.
Figure 13.2 Results of Hex vs Tet Meshes as well as Hybrid Mesh in a Pipe with 90 Degree Bend
506 Mitja Morgut, Enrico Nobile, “Comparison of Hexa-Structured and Hybrid-Unstructured Meshing Approaches
for Numerical Prediction of the Flow Around Marine Propellers”, First International Symposium on Marine
Propulsions smp’09, Trondheim, Norway, June 2009
507 See Previous.
508 Abdel-Maksoud, M., Menter, F., and Wuttke, H.. ‘Viscous flow simulations for conventional and high skew
marine propellers’. Ship Technology Research, 45:64 – 71. 1998.
509 Chen, B. and Stern, F. ‘Computational fluid dynamics of four-quadrant marine-propulsor flow’. Journal of Ship
cavitation on a marine propeller using a rans cfd code’. 5th International Symposium on Cavitation, CAV2003,
Osaka, Japan, 2003.
512 Rhee, S. H. and Joshi, S. ‘Computational validation for flow around marine propeller using unstructured mesh
based navier-stokes solver’. JSME International Journal, Series B, 48(3):562 – 570, 2005.
513 Chesnakas, C. and Jessup, S. ‘Experimental characterization of propeller tip flow’. Proc. 22nd Symposium on Naval Hydrodynamics, 1998.
presented but not visible in Figure 13.4, is the length of Rotating in the direction of the uniform flow. To simulate the flow around a rotating propeller the following boundary conditions were set. On the Inlet boundary, velocity components of a uniform stream with the given inflow speed were imposed, while the turbulence intensity was set to 1% of the mean flow. On the Outlet boundary the static pressure was set to zero. On the outer surface and on the part of the hub included in Fixed, free-slip boundary conditions were set. On the blade surface and on the part of the hub included in Rotating, no-slip boundary conditions were set. On the periodic boundaries (sides of the domain) rotational periodicity was ensured. As turbulence model, the two-equation SST (Shear Stress Transport) model with the automatic treatment of wall functions was employed.

Table 13.1 Dimensions of Domains – (Courtesy of Morgut & Nobile)

                    Propeller A    Propeller P5168
Rotating   Hmid     0.70 D         0.57 D
           Lmid     0.14 D         0.75 D
Fixed      L1       2.0 D          1.5 D
           L2       6.0 D          5.0 D
           H2       1.8 D          1.4 D
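For readability, the boundary-condition specification above can be collected in a single structure. The snippet below is only a plain-Python summary of the setup described by Morgut & Nobile, not ANSYS-CFX input syntax.

```python
# Illustrative summary of the simulation setup (not CFX command syntax).
boundary_conditions = {
    "Inlet":                       {"type": "velocity", "value": "uniform inflow speed",
                                    "turbulence_intensity": 0.01},
    "Outlet":                      {"type": "static pressure", "value": 0.0},
    "Outer surface + hub (Fixed)": {"type": "free-slip wall"},
    "Blade + hub (Rotating)":      {"type": "no-slip wall"},
    "Side boundaries":             {"type": "rotational periodicity"},
}
turbulence_model = "SST (Shear Stress Transport), automatic wall treatment"

for patch, spec in boundary_conditions.items():
    print(f"{patch}: {spec}")
print("Turbulence model:", turbulence_model)
```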
13.2.4 Meshing
All the meshes used in this study were generated using the commercial meshing tool ANSYS-ICEM
CFD 11. For both propellers the Fixed part was discretized only with a unique structured mesh, while
Rotating was discretized with both meshing approaches. Moreover, in the case of propeller P5168, two mesh resolutions (coarse, fine) were used for Rotating. The numbers of nodes of the meshes of propeller A and propeller P5168 are given in Table 13.2 and Table 13.3.

Table 13.2 Grids for Propeller A – (Courtesy of Morgut & Nobile)

Type              Fixed Nodes (Hexa)   Rotating Nodes (Hexa)   Rotating Nodes (Hybrid)
Grid 1            223820               784914                  -
Grid 2            223820               -                       785344

Table 13.3 Grids for Propeller P5168 – (Courtesy of Morgut & Nobile)

Type              Fixed Nodes (Hexa)   Rotating Nodes (Hexa)   Rotating Nodes (Hybrid)
Grid 3 (Coarse)   229437               348810                  -
Grid 4 (Fine)     229437               711932                  -
Grid 5 (Coarse)   229437               -                       340400
Grid 6 (Fine)     229437               -                       741378

Since ANSYS-CFX 11 employs the node-centered finite volume method (more precisely, a Control Volume-based Finite Element Method - CVFEM), the number of nodes was chosen as the parameter of congruence. For that reason, Grid 1 and Grid 2, Grid 3 and Grid 5, and Grid 4 and Grid 6 have a similar number of nodes, respectively. To generate the structured meshes of both propellers, Fixed and especially Rotating were decomposed into a large number of blocks and proper node distributions were used to control the dimensions and quality of the cells. The hybrid meshes were instead generated in two successive steps. First, surface meshes and volume tetrahedral meshes were created using the robust Octree method. Then, in order to resolve the turbulent boundary layer on the solid surfaces with a resolution similar to the one used with the structured meshes, layers of prisms were placed around the hub and blade. In the case of propeller A, 6 layers were generated and in the case of propeller P5168, 15 layers were placed. The average values of y+ on the solid surfaces (hub, blade) of propeller A and propeller P5168 were 20 and 15 respectively. The y+ was defined as y+ = uτ y/ν, where uτ = (τw/ρ)^(1/2) is the friction velocity, y is the normal distance from the wall, ν is the kinematic
viscosity, ρ is density and τw is wall shear stress. In the case of propeller P5168 during the refinement,
the height of the first node off the solid surfaces was kept unchanged. For propeller P5168 the
structured mesh of Fixed is visible in Figure 13.5 (a). Structured and hybrid meshes on the blade
and hub surfaces are depicted in Figure 13.5 (b-c).
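Because the prism layers were sized to give average y+ values of about 15-20, it is common practice to estimate the first-node wall distance before meshing. The sketch below does this with the flat-plate power-law skin-friction correlation Cf ≈ 0.026 Re^(-1/7); it is a generic pre-meshing estimate under assumed water properties and reference scales, not the procedure actually used by Morgut & Nobile.

```python
import math

def first_layer_height(y_plus, U, L, rho=998.0, mu=1.0e-3):
    """Estimate the wall distance y that gives a target y+ on a flat plate.

    y+ = u_tau * y / nu with u_tau = sqrt(tau_w / rho); tau_w is estimated from the
    power-law correlation Cf ~ 0.026 / Re**(1/7) at the reference length L.
    Defaults are water-like properties (rho [kg/m^3], mu [Pa*s]) -- assumed values.
    """
    nu = mu / rho
    re = U * L / nu
    cf = 0.026 / re ** (1.0 / 7.0)
    tau_w = 0.5 * cf * rho * U * U
    u_tau = math.sqrt(tau_w / rho)
    return y_plus * nu / u_tau

# Assumed model-scale numbers: 5 m/s reference speed, 0.2 m reference length, target y+ = 20.
print(f"first node height ~ {first_layer_height(20.0, U=5.0, L=0.2):.2e} m")
```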
Figure 13.5 Meshing for Propeller P5168: (a) Hexa mesh of part Fixed, (b) Surface mesh, Hexa Fine, (c) Surface mesh, Hybrid – (Courtesy of Morgut & Nobile)
13.2.5 Results
To study the influence of the grid on the quality of the prediction of the flow around a marine propeller, numerical data were compared with the available experimental data. Propeller A was used as a preliminary study: the comparison was carried out only on global quantities of the flow, while for Propeller P5168 the comparison was also made, analogously to (Rhee and Joshi, 2005), on the local values of the flow at a downstream location x/R = 0.2386 measured from the propeller mid plane, where R is the radius of the propeller and x is the axial distance. The global values considered were the thrust
coefficient KT , torque coefficient KQ and efficiency η defined as:
KT = T/(ρn²D⁴) ,   KQ = Q/(ρn²D⁵) ,   η = (J/2π)(KT/KQ)
Eq. 13.1
where T [N] is the thrust, Q [Nm] is the torque, n [rps] is the rotational speed of the propeller, D [m] is the
diameter of the propeller, ρ[kg/m3] is the density of the fluid. J=V/nD is the advance coefficient,
where V[m/s] is the velocity of uniform flow. Circumferentially averaged velocity components, and
root-mean square values of turbulent velocity fluctuations were selected as local flow values. The
root mean square of turbulent velocity fluctuations q was defined as
q = √(2k)
Eq. 13.2
where k is the turbulent kinetic energy. In the following graphs and contours all local flow values are non-dimensionalized by the velocity of the uniform flow V. Relative percentage errors presented in the next tables are defined as the difference between the computed and the experimental value, divided by the experimental value and expressed as a percentage.
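As a small worked example of Eq. 13.1, the function below evaluates KT, KQ and η from the thrust, torque, rotational speed, diameter and advance speed; the numbers in the usage line are invented for illustration and are not data from the study.

```python
import math

def propeller_coefficients(T, Q, n, D, V, rho=998.0):
    """Thrust/torque coefficients and open-water efficiency, per Eq. 13.1.

    T [N] thrust, Q [N*m] torque, n [rev/s] rotational speed, D [m] diameter,
    V [m/s] advance speed, rho [kg/m^3] fluid density (water assumed by default).
    """
    KT = T / (rho * n**2 * D**4)
    KQ = Q / (rho * n**2 * D**5)
    J = V / (n * D)
    eta = J / (2.0 * math.pi) * KT / KQ
    return KT, KQ, eta

# Invented model-scale values, for illustration only.
KT, KQ, eta = propeller_coefficients(T=220.0, Q=9.0, n=15.0, D=0.25, V=2.5)
print(f"J = {2.5 / (15.0 * 0.25):.3f}, KT = {KT:.3f}, 10KQ = {10 * KQ:.3f}, eta = {eta:.3f}")
```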
13.2.5.1 Propeller A
In the case of propeller A the simulations were carried out for a wide range of advance ratios. From Table 13.4 and Figure 13.6 it is seen that the numerical results of the different meshing approaches are very close to each other and also in line with the experimental data, especially within the range J = 0.1 - 1.0. Moreover, the differences between results obtained using the different meshes are less than 4%. The relative percentage errors within the range J = 1.1 - 1.2, and especially at J = 1.2, are very high for both meshing approaches, as expected, because thrust and torque are both almost null.
Table 13.6 Relative Percentage Differences of Computed Values Between Finer and Coarser Mesh for
propeller P5168 – (Courtesy of Morgut & Nobile)
515Mitja Morgut, Enrico Nobile, “Comparison of Hexa-Structured and Hybrid-Unstructured Meshing Approaches
for Numerical Prediction of the Flow Around Marine Propellers”, First International Symposium on Marine
Propulsions smp’09, Trondheim, Norway, June 2009.
circumferentially averaged velocity components in the axial (Vx), tangential (Vt) and radial (Vr) directions versus the non-dimensional radial coordinate (r/R) for various J, where r is the radial distance from the centerline of the hub. From these figures it is visible that the predicted trends of the velocity components of both the structured and hybrid meshes are very similar, and differences are hard to detect. Moreover, the axial and tangential velocity components compare well with the experimental data. The radial components, instead, are not so close to the experimental data, but their values are lower and therefore the relative experimental uncertainty is also larger.
It is however noteworthy that within the range r/R = 0.6 - 1.0, even though the computed values are under-predicted, they seem to follow the same trends as the experimental data. A comparison of contours of the root-mean-square values of turbulent velocity fluctuations q on the plane x/R = 0.2386 downstream of the propeller mid plane, for J = 1.1, is presented in [Morgut and Nobile]516. From a qualitative point of view the contours agree well with the experimental data, but from a quantitative point of view it is clear that the magnitude of turbulence kinetic energy is under-predicted, especially on the hybrid-unstructured mesh, where the effect of excessive numerical diffusion is clearly visible. It seems therefore that, at least at model scale, the differences between hexa-structured and hybrid-unstructured meshes do affect the accuracy of the predictions of the turbulence quantities, but the effect on global quantities is modest. It is hard - or even impossible - to extrapolate these conclusions to full scale, given the different qualitative and quantitative character of the turbulent phenomena.
13.2.6 Conclusions
In this study a comparison between hexa-structured and hybrid-unstructured meshing approaches for the prediction of the flow around marine propellers working in uniform flow was carried out.
The study was performed on two five-bladed propellers in model scale. Hexa-structured and hybrid-
unstructured meshes used for comparison were generated with the commercial meshing tool ANSYS-
ICEM CFD 11. The simulations were carried out with the commercial RANS solver ANSYS-CFX 11,
using the moving frame of reference approach and employing the SST (Shear Stress Transport) two
equation turbulence model. Computational results from both meshing approaches were compared
against the experimental data. In the case of propeller A the comparison was made only on global
values while for propeller P5168 the comparison was carried out also on local values of the flow field.
The numerical values of the thrust and torque coefficients computed using structured and hybrid
meshes are both in line with the experimental data. The performance curves computed using
structured meshes are slightly better than those predicted using hybrid meshes. The differences in computed values between the two meshing approaches, excluding the extreme operational conditions, are less than 4% for propeller A and less than 3% for propeller P5168. Also the velocity
profiles of propeller P5168, computed using different meshing approaches are in line with the
experimental data, especially for axial and tangential components, (See [Morgut and Nobile]517). The
overall results suggest that, for the numerical prediction of propulsive performance, the use of hybrid meshes might be an adequate choice, at least at model scale. They can offer an accuracy similar to that of structured meshes and, moreover, they require less effort to generate. On the other hand, at model scale and for the CFD code employed, the hybrid meshes do not seem to be the preferred choice for a detailed investigation of the flow field, since they introduce excessive numerical diffusion into the solution.
516 Mitja Morgut, Enrico Nobile, “Comparison of Hexa-Structured and Hybrid-Unstructured Meshing Approaches
for Numerical Prediction of the Flow Around Marine Propellers”, First International Symposium on Marine
Propulsions smp’09, Trondheim, Norway, June 2009.
517 Mitja Morgut, Enrico Nobile, “Comparison of Hexa-Structured and Hybrid-Unstructured Meshing Approaches
for Numerical Prediction of the Flow Around Marine Propellers”, 1st International Symposium on Marine
Propulsions, Trondheim, Norway, June 2009.
13.3 Case Study 3 – Structured & Unstructured Hybrid Meshing and its Effect on the Quality of the Solution on a Turbine Blade
Automatic robust unstructured hybrid meshing is indispensable for the success in design
optimization518. In addition, it is important to maintain the mesh quality for deformation of geometry
throughout the optimization process for the reliability of optimal design. Mesh adaptation is useful
to capture the flow feature which can highly affect flow properties. Therefore, the present hybrid-
meshing technique with adaptation is applied for various turbomachinery components to validate its
robustness. In addition, a turbine blade is used to compare the effects of mesh for the optimization.
13.3.1 Results
➢ The effect of mesh quality on design optimization with large deformation of the turbine blade is investigated using a structured mesh, an unstructured hybrid mesh without adaptation, and an unstructured hybrid mesh with adaptation (Figure 13.8 (a-c)).
➢ The flow around the turbine blade is computed on the structured and hybrid meshes (Figure 13.8 (d-e)). Because of the different mesh topology and quality, the predicted flow fields differ markedly. In the figures, the structured mesh captures the wake region better than the hybrid mesh without mesh adaptation.
518 Daisuke Sasaki, Caleb Dhanasekaran, Bill Dawes, Shahrokh Shahpar, “Efficient Unstructured Hybrid Meshing and its Quality Improvement for Design Optimization of Turbomachinery”, European Conference on Computational Fluid Dynamics, ECCOMAS CFD 2006.
13.4 Case Study 4 - Evaluation of Structured vs. Unstructured Meshes for Simulating
Respiratory Aerosol Dynamics519
In simulating biofluid flow domains, structured hexahedral meshes are often associated with high
quality solutions. However, extensive time and effort are required to generate these meshes for
complex branching geometries. This study, conducted by [Samir Vinchurkar & Worth Longest]520, evaluates potential mesh configurations that may maintain the advantages of the structured hexahedral style while providing significant savings in grid construction time and complexity.
Specifically, the objective here is to evaluate the performance of unstructured hexahedral,
prismatic and hybrid meshes (prismatic + Tetrahedral) based on grid convergence and local
particle deposition fractions in a bifurcating model of the respiratory tract. A grid convergence
index (GCI) has been implemented to assess the mesh-independence of solutions in cases where true
grid halving is not feasible. Structured hexahedral, unstructured hexahedral and prismatic
meshes were found to provide GCI values of approximately 5% and nearly identical velocity
fields. In contrast, the hexahedral–tetrahedral hybrid model resulted in GCI values that were
significantly higher in comparison to the other meshes. The resulting velocity field for the hybrid
configuration differed from the hexahedral and prismatic solutions by up to an order of magnitude
at some locations. Considering the deposition of 10 μm particles in the planar configuration, all
meshes considered provided relatively close agreement (2–20% difference) with an available
experimental study. For all particle sizes considered, local and total deposition results for the
structured and unstructured hexahedral meshes were similar. In contrast, the prismatic and hybrid
geometries resulted in significantly higher deposition rates when compared to the hexahedral
meshes for particles less than 10 μm. As a result, only the unstructured hexahedral mesh was found
to provide overall performance similar to the structured hexahedral configuration with the
advantage of a significant savings in construction time. These results emphasize the importance of
aligning control volume gridlines with the predominant flow direction in biofluid applications that
involve long and thin internal flow domains.
13.4.1 Bifurcation Model, Boundary Conditions, and Contributions
The geometry selected to evaluate the mesh styles of interest is a double bifurcation model
representative of respiratory generations G3–G5. This model is generated from the ‘‘Physiologically Realistic Bifurcation’’ (PRB) geometry specified by
[Heistracher & Hofmann]521. For the PRB geometry, [Heistracher and Hofmann]522 provide a
complete mathematical description of a single symmetric or asymmetric bifurcation based on a set
of 11 geometric parameters and two sigmoid functions. Specific parameters for the double
bifurcation model of generations G3–G5 employed in this study are identical to the values used in the
work of [Heistracher and Hofmann] and the localized particle deposition measurements of [Oldham
et al.]523. The inlet diameter of G3 in the model is 0.56 cm. Further geometric details of this
configuration have been reported in [Longest and Vinchurkar]524. In this study, grid convergence,
velocity fields, and local particle deposition profiles will be evaluated for an in-plane configuration,
as implemented in the experimental study of [Oldham et al.]. For comparison, local deposition
519 Samir Vinchurkar, P. Worth Longest, “Evaluation of hexahedral, prismatic and hybrid mesh styles for
simulating respiratory aerosol dynamics”, Computers & Fluids, 2008.
520 See Previous.
521 Heistracher T, Hofmann W. Physiologically realistic models of bronchial airway bifurcations. J Aerosol Sci
1995;26:497–509.
522 See Previous.
523 Oldham MJ, Phalen RF, Heistracher T. Computational fluid dynamic predictions and experimental results for particle deposition in an airway model. Aerosol Sci Technol 2000;32:61–71.
524 Longest PW, Vinchurkar S. Effects of mesh style and grid convergence on particle deposition in bifurcating airway models with comparisons to experimental data. Med Eng Phys 2007;29:350–66.
patterns will also be considered in an out-of-plane model where the second bifurcation has been
rotated by an angle of 90 degrees.
The steady inspiratory flow rate employed in the PRB model results in an inlet Reynolds number of
1788. For respiratory generations G3–G5, this is consistent with an inhalation flow rate in the trachea
of 60 l/min and represents a state of heavy exertion. The flow rate in generation G3 is 125 ml/s, as
specified in the experimental study of [Oldham et al].
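The quoted inlet Reynolds number can be checked directly from the stated flow rate and inlet diameter; the short calculation below assumes air near room conditions with a kinematic viscosity of about 0.159 cm²/s (an assumed property value).

```python
import math

def inlet_reynolds(Q_mls, D_cm, nu_cm2s=0.159):
    """Inlet Reynolds number Re = U * D / nu from a volumetric flow rate.

    Q_mls : flow rate in ml/s (1 ml = 1 cm^3), D_cm : inlet diameter in cm,
    nu_cm2s : kinematic viscosity in cm^2/s (air near room temperature, assumed).
    """
    area = math.pi * (D_cm / 2.0) ** 2   # cross-sectional area [cm^2]
    U = Q_mls / area                     # mean velocity [cm/s]
    return U * D_cm / nu_cm2s

# 125 ml/s through the 0.56 cm inlet of generation G3
print(f"Re ~ {inlet_reynolds(125.0, 0.56):.0f}")   # close to the quoted value of 1788
```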
Both inlet velocity and initial particle profiles are expected to have a significant impact on the flow
field and particle deposition locations. For comparisons to in vitro deposition data, these profiles may
be largely influenced by upstream effects in the experimental particle generation system. [Longest
and Vinchurkar] have shown that upstream transition to turbulence results in a relatively blunt
initial velocity field and particle profile at the model inlet. However, the flow within the PRB model
can be approximated as laminar. As such, fully-developed blunt turbulent profiles of velocity and
initial particle distributions have been assumed at the model inlet. Within the model, laminar flow is
assumed. Outlet flow is assumed to be evenly divided between the left and right symmetric branches,
i.e., homogeneous ventilation. Gravity has been included in the flow field and particle trajectory
calculations of the PRB model with the gravity vector oriented in the negative z-direction, i.e., normal
to the plane of the bifurcation, to remain consistent with the experiments of [Oldham et al].
Figure 13.9 Geometric Blocking Used (a) Structured Hexahedral (178 Blocks) and (b) Unstructured
Hexahedral (80 Blocks) – (Courtesy of Samir Vinchurkar & Worth Longest)
13.4.2.1 Structured
The structured base case mesh consists of six-sided hexahedral elements arranged in a system of
interconnected rectangular blocks. The blocks have been arranged in a butterfly blocking design
which minimizes control volume distortions while aligning a higher percentage of elements with the
local flow direction (Figure 13.9 a). Moreover, mesh density is increased near the wall and near the
bifurcation points. This multi-block structure is difficult to develop because gridlines may be
distorted, but must remain continuous throughout the geometry. Designing a high quality block-
structured meshing configuration for a geometry with multiple branches in which hexahedral
elements largely align with streamlines is a user-intensive, non-trivial task.
13.4.2.2 Unstructured
As with the structured mesh, the unstructured hexahedral configuration requires the creation of
sub-blocks within the geometry. However, the unstructured hexahedral design allows for two faces
on each block to have a non-continuous grid (Figure 13.10 b). Furthermore, blocks with one pair of
triangular faces may be accommodated. As a result, the planes forming these blocks may pass entirely
through the geometry (Figure 13.9 b). These planes are much easier to construct than the planes in
Figure 13.10 Four Meshing Styles of the PRB Model (a) Structured Hexahedral, (b) Unstructured
Hexahedral, (c) Prismatic, and (d) Hybrid – (Courtesy of Samir Vinchurkar & Worth Longest)
the structured hexahedral configuration that only partially bisect the geometry. In addition, the
blocking structure for the unstructured hexahedral mesh reduces the number of required blocks by
over 50% (Figure 13.9). Once the geometry is divided into the required blocks, non-continuous
meshes are created on cross-sectional surfaces. These meshed faces are then swept through the
geometry in the axial direction to generate the volumetric mesh. As a result, this mesh style retains
the advantage of aligning mesh elements in the predominate direction of flow.
The prismatic mesh consists of five-sided elements which are composed of two triangles joined
together by a longitudinal section of three rectangular faces. Generation of this mesh style requires
four-sided faces to be constructed on the surface of the PRB (Figure 13.10 c). The prismatic
elements are arranged such that their triangular faces fill the axial slices (Figure 13.10 c). This
allows for the rectangular sections of each prismatic element to be aligned with the direction of
predominate flow.
In order to improve the accuracy of the tetrahedral mesh style, an unstructured hexahedral–
tetrahedral hybrid mesh has been created (Figure 13.10 d). As with the prismatic mesh, four-
sided faces are required on the surface of the PRB geometry. These faces are used to construct
structured quadrilateral surface meshes, which form the basis for a layer of near-wall hexahedral
cells. The hexahedral elements are intended to better resolve the flow field near the walls where
velocity gradients are typically highest. The inner core of the flow field is then meshed with randomly
oriented tetrahedral elements. A layer of prismatic elements is used to join the hexahedral and
tetrahedral cells. In this configuration, the thin near-wall layer of hexahedral elements is aligned
with the predominate direction of flow. However, it is not possible for the randomly oriented
tetrahedral elements, which comprise a majority of the flow field, to be aligned with the axial flow
direction525.
13.4.3 Governing Equations
Flow conditions in the meshes considered are assumed to be isothermal, incompressible, laminar
and steady. Furthermore, the particle concentrations are assumed to be sufficiently dilute such that
momentum coupling effects of the dispersed phase on the fluid can be neglected, i.e., a one way
coupled flow. The governing equations for the respiratory airflow of interest include the
conservation of mass and momentum as:
∇·𝐮 = 0 ,    ∂𝐮/∂t + (𝐮·∇)𝐮 = (1/ρ)(−∇p + ∇·𝛕)
Eq. 13.5
where u is the velocity vector, p is the pressure, ρ is the fluid density, and the shear stress tensor τ is
given by
𝛕 = μ[∇𝐮 + (∇𝐮)T ]
Eq. 13.6
and μ is the absolute viscosity. Hydrodynamic inlet and boundary conditions, in addition to the no-
slip wall condition, were selected to match the experimental conditions of interest. To approximate
a uniform outflow distribution, equally divided mass flow was specified. Furthermore, flow field
outlets were extended far downstream such that the velocity was normal to the outlet plane, i.e., fully
developed flow profiles with no significant radial velocity component. One-way coupled trajectories
of monodisperse 1–10 μm aerosols have been calculated on a Lagrangian basis by integration of an
appropriate version of the particle trajectory equation for comparison to the experimental results of
[Oldham et al.]. Characteristics of the 1–10 μm aerosols of interest within this model include a particle
density ρp = 1.06 g/cm³, a density ratio α = ρ/ρp ≈ 10⁻³, a Stokes number St = ρp dp² Cc U/(18 μ D) ranging
525 The hybrid style consists of tetrahedral elements throughout the interior surrounded by three layers of
hexahedral control volumes on the surface. The internal block divisions have been shown in the cross-sectional
slices of the structured and unstructured hexahedral meshes.
from 0.003 to 0.26, and a particle Reynolds number Rep = ρ|u -v| dp/μ ≤ 10. The appropriate equations
for spherical particle motion under the conditions of interest are expressed as
f = CD Rep/24 = (Rep/24)(a1 + a2/Rep + a3/Rep²)
Eq. 13.8
where the ai coefficients are constant for smooth spherical particles over the range of Reynolds
number considered, i.e. 0 ≤ Rep ≤ 10. The effect of the lubrication force, or near-wall drag
modifications, is expected to be reduced for the aerosol system of interest in
comparison to liquid flows due to near-wall non-continuum effects. As such, this term has been
neglected for the simulations considered here. Due to the significant size of the particles considered
and the dilute concentrations, Brownian motion and particle-to-particle collision effects have been
neglected. The Cunningham correction factor has only been applied for 1 μm aerosols based on the
expression of [Allen and Raabe]527. Inlet particle profiles have been specified to be consistent with
the local mass flow rate associated with the blunt velocity profile considered. That is, the mass flow
rate of particles on a finite ring, ṁp,ring, at the inlet is given by
ṁp,ring ~ ṁring = ∫ ρ u(r) 2πr dr ,   r1 ≤ r ≤ r2
Eq. 13.9
where r1 and r2 define the extent of the ring and u(r) is the inlet velocity profile. Initial particle
velocities were assumed to match the local fluid velocities. Further details describing the
specification of initial particle profiles are discussed in [Longest and Vinchurkar]528.
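To make the quoted non-dimensional particle parameters concrete, the sketch below evaluates the Stokes number St = ρp dp² Cc U/(18 μ D) and the density ratio for 1 μm and 10 μm particles, using assumed air properties, assumed slip-correction values, and a mean inlet velocity consistent with the Re = 1788 condition; the results fall in the same range as the 0.003–0.26 quoted above, with small differences attributable to the exact property values chosen.

```python
def stokes_number(d_p, U, D, rho_p=1060.0, mu=1.8e-5, C_c=1.0):
    """Stokes number St = rho_p * d_p**2 * C_c * U / (18 * mu * D), in SI units.

    d_p particle diameter [m], U characteristic velocity [m/s], D airway diameter [m],
    rho_p particle density [kg/m^3], mu air dynamic viscosity [Pa*s] (assumed value),
    C_c Cunningham slip correction (only significant for the smallest particles).
    """
    return rho_p * d_p**2 * C_c * U / (18.0 * mu * D)

U = 5.08     # mean G3 inlet velocity [m/s], from 125 ml/s through a 0.56 cm tube
D = 0.0056   # G3 inlet diameter [m]
for d_um, C_c in [(1.0, 1.16), (10.0, 1.02)]:   # slip corrections are assumed values
    print(f"d_p = {d_um:4.1f} um -> St = {stokes_number(d_um * 1e-6, U, D, C_c=C_c):.3f}")
print("density ratio alpha ~", round(1.2 / 1060.0, 5))   # air density 1.2 kg/m^3 assumed
```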
13.4.4 Numeric Method
To solve the governing mass and momentum conservation equations in each of the geometries and
for each mesh style, the CFD package Fluent 6.2 has been employed. User-supplied FORTRAN and C
programs have been employed for the calculation of initial particle profiles, particle deposition
locations, grid convergence, and post-processing. All transport equations were discretized to be at
526 Morsi SA, Alexander AJ. An investigation of particle trajectories in two-phase flow systems. J Fluid Mech
1972;55(2):193–208.
527 Allen MD, Raabe OG. Slip correction measurements of spherical solid aerosol particles in an improved Millikan apparatus. Aerosol Sci Technol 1985;4:269–86.
528 Longest PW, Vinchurkar S. Effects of mesh style and grid convergence on particle deposition in bifurcating airway models with comparisons to experimental data. Med Eng Phys 2007;29:350–66.
least second order accurate in space. For the convective terms, a second order upwind scheme was
used to interpolate values from cell centers to nodes. The diffusion terms were discretized using
central differences. To improve the computation of gradients for the tetrahedral elements of the
hybrid mesh, face values were computed as weighted averages of values at nodes, which provides an
improvement to using cell-centered values for these meshes. Nodal values for the computation of
gradients were constructed from the weighted average of the surrounding cells, following the
approach proposed by [Rauch et al.]529. A segregated implicit solver was employed to evaluate the
resulting linear system of equations. This solver uses the Gauss–Seidel method in conjunction with
an algebraic multigrid approach to solve the linearized equations. The SIMPLEC algorithm was
employed to evaluate pressure–velocity coupling. The outer iteration procedure was stopped when
the global mass residual had been reduced from its original value by five orders of magnitude and
when the residual-reduction-rates for both mass and momentum were sufficiently small.
To ensure that a converged solution had been reached, residual and reduction-rate factors were
decreased by an additional order of magnitude and the results were compared. The stricter
convergence criteria produced a negligible effect on both velocity and particle deposition fields. To
improve accuracy, CGS units were employed, and all calculations were performed in double
precision. To further improve resolution in the particle deposition studies, geometries were scaled
by a factor of 10 and the appropriate non-dimensional parameters were matched. To determine grid
convergence and establish grid independence of the velocity field solutions, successive refinements
of each mesh style have been considered. For each refinement, grid convergence is evaluated using a
relative error measure of velocity magnitude between the coarse and fine solutions:
εi = |(ui,coarse − ui,fine)/ui,fine|
Eq. 13.10
A vector of relative error values was determined for 1000 consistent points located in the region of
the bifurcation. The root-mean-square of the relative error vector was used to provide an initial
scalar measure of grid convergence for the points considered
εrms = ( Σi=1…1000 εi² / 1000 )^(1/2)
Eq. 13.11
Rigorously, grid convergence measures should be based on refining the grid by a factor of two, i.e.,
grid halving. However, dividing hexahedral elements by a factor of two in three dimensions is often
not practical due to the significant increase in the number of control volumes. As such, relative error
values must be adjusted to account for cases in which grid reduction factors less than r = 2 are
employed. To extrapolate εrms values to conditions consistent with true grid halving, the Grid
Convergence Index (GCI) has been suggested by [Roache]530. This method is based on Richardson
extrapolation and can be applied as
GCI = Fs εrms / (r^p − 1)
Eq. 13.12
In the above equation, r represents the grid refinement factor and p is the order of the discretization
method. Based on second-order discretization of all terms in space, p = 2 for the systems of interest.
529 Rauch RD, Batira JT, Yang NTY. Spatial adaption procedures on unstructured meshes for accurate unsteady
aerodynamic flow computations. Technical Report AIAA-91-1106, 1991.
530 Roache P. Computational fluid dynamics. Albuquerque: Hermosa; 1992.
Refinement of the meshes was performed to maintain a constant reduction value in the three
coordinate directions. The associated r value has been calculated as the ratio of control volumes in
the fine and coarse meshes
r = (Nfine/Ncoarse)^(1/3)
Eq. 13.13
To limit errors arising from the extrapolation procedure, r values of approximately 1.5 or greater
have been considered. A factor of safety FS equal to 3 has been selected to provide a GCI value equal
to the εrms value when r = 2 and p = 2. Therefore, the GCI value represents a scaled version of εrms to
account for mesh refinement factors less than 2. Particle trajectories were calculated within the
steady flow fields of interest as a post-processing step. The integration scheme employed to solve Eq.
13.7 was based on the trapezoid rule with a minimum of 10 integration steps in each control volume.
Doubling the number of integration steps within each control volume had a negligible (less than 1%)
effect on cumulative particle deposition values. Due to relatively small particle response times,
double precision calculations have been employed. It was found that approximately 20,000 particle
trajectories were required to produce convergent cumulative deposition values based on a 1%
relative error criterion. As such, 20,000 particles have been initialized in all deposition cases
considered.
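The grid-convergence procedure of Eqs. 13.10–13.13 is compact enough to sketch directly: sample both solutions at the same points, form the relative errors, take their root mean square, derive the refinement factor from the cell counts, and scale by Fs/(r^p − 1). The data below are placeholders chosen to resemble the structured hexahedral high-resolution case, not the actual solution fields.

```python
import numpy as np

def grid_convergence_index(u_coarse, u_fine, n_coarse, n_fine, p=2, Fs=3.0):
    """Grid Convergence Index following Eqs. 13.10-13.13.

    u_coarse, u_fine : solution values sampled at the same comparison points.
    n_coarse, n_fine : total number of control volumes in the coarse and fine meshes.
    """
    eps = np.abs((u_coarse - u_fine) / u_fine)       # Eq. 13.10
    eps_rms = np.sqrt(np.mean(eps**2))                # Eq. 13.11
    r = (n_fine / n_coarse) ** (1.0 / 3.0)            # Eq. 13.13
    gci = Fs * eps_rms / (r**p - 1.0)                 # Eq. 13.12
    return gci, eps_rms, r

# Placeholder data: a fine-grid field plus ~2% random deviation on the coarse grid.
rng = np.random.default_rng(1)
u_fine = rng.uniform(0.1, 2.0, 1000)
u_coarse = u_fine * (1.0 + rng.normal(0.0, 0.02, 1000))
gci, eps_rms, r = grid_convergence_index(u_coarse, u_fine, n_coarse=64_000, n_fine=214_000)
print(f"eps_rms = {eps_rms:.2%}, r = {r:.2f}, GCI = {gci:.2%}")
```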
13.4.5 Results
13.4.5.1 Validation Studies
Validations of velocity field values for the structured hexahedral mesh scheme applied to a
bifurcation geometry have been reported in a previous study. Briefly, a single bifurcation model was
considered with a characteristic Reynolds number of 518 and results were compared to the empirical
velocity field data of [Zhao and Lieber]531. For steady inhalation flow, the velocity field results of
[Longest and Vinchurkar] indicate good quantitative agreement with the empirical data of [Zhao and
Lieber].
13.4.5.2 Grid Convergence
To evaluate grid convergence for each mesh style considered, low, mid and high-resolution
comparisons between coarse and fine grids have been considered for the planar geometry. Results
of this comparison in the form of grid convergence values and required simulation times are reported
in Table 13.7 (a-d) and are discussed below. The reported grid convergence results are for the
planar bifurcation model. Similar grid convergence results were observed for the out-of-plane
configuration. The number of grid cells required is based on the presence of one symmetry plane, i.e.,
one-half of the geometry is meshed. As described, grid convergence has been based on comparisons
between coarse and fine grid solutions at 1000 points concentrated in the region of the bifurcation.
A layer of near-wall comparison points was positioned to be less than 5% of the internal radius away
from the wall. Selections of other sets of 1000 points as well as doubling the number of points
considered had a negligible (i.e., less than 1%) impact on the grid convergence values reported. For
the structured hexahedral mesh, successive grid refinements resulted in an effective reduction of εrms
values (Table 13.7a). For the high resolution case, an εrms of 1.99% was obtained. In comparison to
other relative error estimates, this value is relatively high. However, the selection of 1000 points with
many locations near the wall and in low velocity positions produces a very rigorous condition for
testing grid convergence. Moreover, errors on the order of 1% are expected to arise from the linear
interpolation algorithm used to calculate values at the positions of interest for comparisons of the
coarse and fine grid solutions. Therefore, achieving εrms values below 1% may not be possible with
531 Zhao Y, Lieber BB. Steady inspiratory flow in a model symmetrical bifurcation. J Biomech Eng 1994.
the rigorous grid convergence method employed. In this study, values of εrms on the order of
approximately 1% are considered to represent a well converged solution.
Accounting for the grid reduction factor used in the high resolution case results in a GCI value of
4.27% for the structured hexahedral mesh with 214 K control volumes. Grid convergence estimates
for the unstructured hexahedral mesh are reported in Table 13.7 b. These results are highly similar
to the grid convergence values observed for the base case. That is, an εrms value of 1.95% is achieved
for the high resolution case. However, the number of grid cells required to achieve this level of grid
convergence was increased from 214K for the structured hexahedral mesh to 318 K for the
unstructured hexahedral mesh. This increase in cell number resulted in a 10% increase in solution
time. Grid convergence index values on the order of 10%, as observed for the medium-level
resolution, are shown to result in visible differences between velocity profiles. For the high resolution
case, which is characterized by a GCI of 4.32%, differences in the velocity profiles are much less
discernable. For the prismatic mesh configuration, εrms and GCI values are similar to those observed
for the hexahedral style meshes (Table 13.7 c). However, to achieve this level of grid convergence,
grid resolution was increased by approximately 30–40% for each case considered. This increase in
grid density produces an associated increase in simulation time of approximately 20%. Furthermore,
it is observed that the medium-level resolution case of the unstructured prism mesh results in a GCI
value of approximately 6.6%, which is consistent with the high resolution prismatic case and significantly lower than for the medium resolution hexahedral meshes. Grid convergence values for the hybrid meshes are significantly higher than the values reported for the other configurations (Table 13.7 d). The minimum GCI value for the hybrid style was 17.7% and occurred at the medium-level grid density.
Table 13.7 Grid Convergence – (Courtesy of Samir Vinchurkar & Worth Longest)
For the high-level resolution condition, the GCI value increased to 21.3%. Further increases in grid
density resulted in a higher GCI value. This increase may be a result of round-off errors arising from
an over-resolved grid. Furthermore, this level of grid convergence is consistent with GCI values
observed for purely tetrahedral meshes with and without flow adaption. As a result, the hybrid mesh
style results in GCI values that are significantly higher than observed for the other meshes considered
in this study, and appears to provide little advantage over purely tetrahedral style meshes. The higher
GCI values of the hybrid configuration may largely be a result of mesh elements not aligning with the
direction of predominate flow.
13.4.5.3 Velocity Fields
Velocity vectors, contours of velocity magnitude and streamlines of secondary motion are presented
in Figure 13.11 for the high resolution cases of the four mesh styles considered in the planar
bifurcation model. Midplane velocity fields appear highly similar among the hexahedral and
prismatic meshes (Figure 13.11 c). However, the hybrid mesh results in a significant reduction in
midplane velocity gradients, which may arise from artificial or numerical dissipation (Figure 13.11
d). Similarly, secondary motions viewed at cross-sectional slice locations appear similar among the
first three mesh styles considered (Figure 13.11 c). A single vortex is observed for the upper half of
the geometry at Slice 1. The second carinal ridge produces a pair of counter rotating vortices for the
inner branch of G5, as observed in Slice 2 (Figure 13.11c). However, due to the highly dissipative
conditions of the hybrid mesh, only one fully formed vortex is observed in each of the three cross-
sectional planes considered (Figure 13.11 d). In summary, mid plane velocity vectors appear
relatively consistent among the four meshes considered, with some variations observed for the
hybrid configuration.
Secondary velocity profiles appear similar between the two hexahedral mesh styles. However,
secondary velocity profiles are significantly different for the non-hexahedral meshes with the largest
variations occurring for the hybrid configuration. In order to better evaluate differences among the
solutions of the meshes considered, mid plane velocity profiles have been plotted at Slices 1–3 for
high resolution conditions in the planar model. At each location, velocity profiles for the hexahedral
and prismatic meshes are similar. However, minor differences among the first three solutions are
discernable. This observation highlights the fact that a high level of grid convergence does not ensure
an exact match among solutions of different mesh types. In contrast to the hexahedral and prismatic
solutions, the hybrid configuration results in significantly different velocity profiles. Velocity values
for the hybrid solution again appear to be influenced by a high degree of dissipation. Considering
Slice 3, differences between the first three solutions and the hybrid configuration vary between
approximately 30% to one order of magnitude.
13.4.5.4 Particle Deposition
Deposition locations for the four mesh styles considered and the planar geometry with 10 μm
particles are shown in Figure 13.12. The 10 μm aerosols deposit primarily by impaction.
Qualitatively, the observed deposition locations are very similar between the structured and
unstructured hexahedral meshes. Furthermore, the hexahedral mesh styles exhibit very distinct
divisions between regions of deposition and areas devoid of particle–wall interactions. In contrast,
particle deposition locations for the prismatic and hybrid meshes appear more diffuse. This effect
may be the result of fewer mesh elements aligned with the flow, especially for the hybrid
configuration. Nevertheless, each of the mesh styles considered emphasizes local accumulations of
particles, referred to as hotspots, occurring just upstream of the bifurcation points and continuing
downstream for approximately one-half the branch lengths.
For 10 μm particles, the structured hexahedral, unstructured hexahedral and prismatic high
resolution meshes all match the experimental data of [Oldham et al]. For these three solutions,
variations from the cumulative particle deposition experimental data are within 2–3%. Furthermore,
these solutions result in a final deposition fraction that is within approximately 1% of the
experimentally reported value of 81%. Differences in cumulative deposition values among the
solutions for the hexahedral and prismatic meshes vary by less than 1%. In contrast, cumulative deposition results for the high-resolution hybrid mesh and 10 μm particles are significantly lower than the experimental data. The hybrid mesh considered is observed to under-predict cumulative deposition by approximately 20% (see [Samir Vinchurkar & Worth Longest]532).
Figure 13.11 Velocity Vectors (a) Structured Hexahedral Mesh with 214 K C.V., (b) Unstructured Hexahedral Mesh with 318 K C.V., (c) Prismatic Mesh with 510 K C.V., (d) Hybrid Mesh with 608 K C.V. – (Courtesy of Samir Vinchurkar & Worth Longest)
As particle size decreases, larger differences are observed among the cumulative deposition
predictions for the mesh styles considered. For 5 μm aerosols, the structured and unstructured
hexahedral meshes are in close agreement with a final deposition fraction between 5% and 6%. In
contrast, the prismatic mesh predicts a cumulative deposition of 11%, which is approximately double
the hexahedral mesh estimates. Results for the hybrid mesh and 5 μm particles are even higher, with
a total deposition fraction of 12%. Considering 3 μm particles, close agreement is observed between
the hexahedral mesh predictions with a total deposition fraction of 0.3%. In contrast, the prismatic
and hybrid configurations predict a deposition rate of approximately 1.8%. A similar trend is
observed for 1 μm aerosols. Again, results for the structured and unstructured hexahedral
configurations are in close agreement with a total deposition fraction ranging between 0.12% and
0.17%. However, predictions of the prismatic and hybrid meshes are significantly higher, by a factor of approximately five.
Figure 13.12 Deposition Locations for 10 μm Particles in the Planar Geometry for the (a) Structured Hexahedral Mesh, (b) Unstructured Hexahedral Mesh, (c) Prismatic Mesh, and (d) Hybrid Mesh – (Courtesy of Samir Vinchurkar & Worth Longest)
532Samir Vinchurkar, P. Worth Longest, “Evaluation of hexahedral, prismatic and hybrid mesh styles for
simulating respiratory aerosol dynamics”, Computers & Fluids, 2008.
In general, cumulative deposition results are consistent between the structured and unstructured
hexahedral meshes for the planar geometry. Results for the prismatic and hybrid meshes differ from
the hexahedral results by values ranging from 20% (10 μm) to a factor of five (1 μm). Deposition
predictions of the prismatic and hybrid meshes are also generally higher than for the hexahedral
models. Differences in deposition results between the hexahedral and prism/hybrid meshes appear
to increase with decreasing particle size. For all particle sizes considered in respiratory generations
G3–G5, impaction is the primary deposition mechanism. However, the smaller particles considered
have less inertia and are influenced to a greater extent by the secondary velocity patterns. Significant
differences in secondary velocity profiles were observed between the hexahedral and other mesh
styles considered in Figure 13.11. Therefore, it is concluded that differences in secondary motion
patterns associated with mesh style are partially responsible for increased differences in deposition
patterns as particle size is reduced.
Furthermore, the increase in secondary motion associated with out-of-plane bifurcations may induce additional discrepancies among the models considered. Cumulative deposition results for the out-of-plane geometry and particle sizes of 3 and 10 μm are reported in [Samir Vinchurkar & Worth Longest]533. As with the planar geometry for 10
μm particles, close agreement is observed between the hexahedral and prismatic mesh
configurations with a total deposition rate of approximately 90%. The hybrid mesh results in an 85%
deposition value, which is in relatively close agreement with the other mesh styles considered.
However, significant differences in model predictions are again observed as the particle size is
decreased. For 3 μm aerosols, results for the structured and unstructured hexahedral meshes appear
to be in close agreement with a total deposition rate of approximately 1.8%. Deposition results for
the prismatic and hybrid meshes are approximately six times higher than the other model predictions
with a total deposition fraction of 11%.
13.4.6 Discussion
In this study, the effects of mesh style have been evaluated with respect to grid convergence, velocity
fields and particle deposition values in a double bifurcation model of the respiratory tract. Mesh
styles considered include structured hexahedral, unstructured hexahedral, prismatic and hybrid
configurations. Particles ranging from 1 to 10 μm have been evaluated in planar and out-of-plane
geometries. Deposition results for 10 μm particles in the planar geometry were found to be in close
agreement with the experimental deposition data of [Oldham et al.] on a highly localized basis. In
general, grid convergence, velocity fields, and local particle deposition values were consistent
between the structured and unstructured hexahedral meshes. Both hexahedral meshes considered
resulted in GCI values of approximately 5% and nearly identical midplane and secondary velocity
patterns.
Furthermore, local particle deposition profiles were largely similar for the hexahedral meshes across
the range of particle sizes evaluated. Considering the prismatic mesh, GCI values were comparable to
the hexahedral configuration with only a moderate increase in control volume number. Prismatic
velocity fields were consistent with the hexahedral results, with some minor variations in the
secondary velocity profiles. However, the prismatic mesh resulted in significant differences in local
deposition profiles for particles less than 10 μm. The hybrid mesh resulted in a GCI value that was
significantly higher than observed for the other meshes. This increase in GCI occurred despite a
significant increase in the number of cells in the hybrid mesh. The velocity field for the hybrid
configuration differed from the hexahedral and prismatic solutions by up to an order of magnitude
at some locations with significant differences in the secondary vortex patterns. Moreover, deposition
results for the hybrid mesh differed from the hexahedral results by values ranging from 20% (10 μm)
533Samir Vinchurkar, P. Worth Longest, “Evaluation of hexahedral, prismatic and hybrid mesh styles for
simulating respiratory aerosol dynamics”, Computers & Fluids, 2008.
to a factor of five (1μm). For the out-of-plane bifurcating geometry, local deposition results were
generally consistent for 10 μm aerosols, but differed significantly for 3 μm particles among the mesh
styles considered.
This study highlights the effects of mesh style on grid convergence and related solution variables for
an internal biofluid flow field. For any CFD problem, the required quality of the solution is often
weighted against the time and resources available for mesh development. Structured hexahedral
meshes are often thought to provide the highest quality solution, but the associated mesh
construction time may be prohibitively expensive. In this study, structured hexahedral and
unstructured hexahedral mesh schemes have been shown to provide highly comparable grid
convergence values, velocity fields and particle deposition profiles.
Moreover, both of these mesh styles predicted deposition results in very close agreement with
experimental data for 10 μm aerosols in a planar geometry. As illustrated in Figure 13.9, construction of the unstructured hexahedral mesh requires a less complex blocking scheme than for
the structured hexahedral configuration. For example, construction of the structured hexahedral
mesh requires the creation of 178 blocks in comparison to 80 blocks for the unstructured hexahedral
mesh. Therefore, the unstructured hexahedral mesh offers a significant savings in construction time
without an appreciable loss in solution performance. Compared with the purely tetrahedral meshes
considered in [Longest and Vinchurkar], the hybrid mesh employed in this study showed no
improvement in performance.
Construction of the hybrid mesh did require subdivision of the PRB surface geometry into
rectangular faces. In contrast, construction of purely tetrahedral meshes does not require subdividing
the surface into rectangular faces. As a result, purely tetrahedral and flow adaptive tetrahedral
meshes may be advantageous in comparison to the hybrid mesh considered in this study.
Furthermore, the use of tetrahedral meshes may be preferred when rapid approximate solutions are
the top priority. This scenario may arise for patient-specific modeling in the clinical setting. That is,
approximate solutions with rapidly generated tetrahedral meshes may be necessary in order to make
true patient-specific modeling a reality in the clinical setting.
13.4.6.1 Advantages of Hexahedral Structured Mesh
In this study, hexahedral and prismatic meshes were found to provide adequate grid convergence
and similar velocity fields. For particle deposition, hexahedral mesh configurations appear to
provide the best solution. The observed better performance of the hexahedral and prismatic meshes
in comparison to the hybrid mesh may occur for two reasons:
First, both hexahedral and prismatic meshes can be aligned with the predominate direction of flow.
This alignment is reported to reduce numerical diffusion errors. Furthermore, discretization errors
partially cancel on opposite hexahedral faces. In contrast, mainly tetrahedral meshes cannot be
aligned with the direction of predominate flow, thereby increasing the potential for numerical
diffusion. Therefore, numerical diffusion errors associated with randomly oriented tetrahedral faces
are one likely cause of the higher grid convergence values observed for these meshes. The occurrence
of these errors is enhanced in the unidirectional flow system considered.
The second possible factor responsible for the improved performance of the hexahedral solutions is
the use of higher order elements. The hexahedral elements implemented provide more nodes per
face for improved predictions of flux values and particle tracking. Some commercial CFD packages
provide an increased number of nodes per face to account for this problem. However, the effect of
increasing the number of nodes per face has not been quantified for internal biofluid flows.
Furthermore, the effect of nodes per face on solution performance is expected to be a secondary
factor in comparison to aligning the grid with the predominate direction of flow in the long and thin
conduits of interest.
Limitations of the current study include calculation of the GCI parameter at linearly interpolated
points, the evaluation of a single software package, and the construction of only one style of hybrid
mesh. The grid convergence parameter was evaluated at 1000 representative points throughout the
flow field. These points include near-wall locations where minor variations in flow field velocities
can result in very large relative errors. Modifying the number and location of these randomly selected
points did not appreciably change the GCI value provided at least 1000 points were included.
However, interpolation errors are present in determining values at comparison points. These errors
are estimated to be on the order of approximately 1%. Nevertheless, the grid convergence algorithm
employed provided an effective strategy for evaluating relative performance among the mesh styles
considered that includes low velocity and near-wall regions.
In this study, only one commercial software package was evaluated. Other software may improve the
solution quality of the hybrid configuration. Moreover, many other hybrid mesh styles are possible.
Nevertheless, evaluation of a representative state-of-the-art commercial software provides a
valuable basis of comparison for various styles of meshes. Furthermore, this study highlights the
advantages of aligning mesh elements with the predominate direction of flow, which is
independent of the computational package considered.
13.4.7 Conclusion
In conclusion, structured and unstructured hexahedral meshes have been shown to provide
acceptable grid convergence values, comparable velocity fields and good agreement with
experimental 10 μm particle deposition data in a branching respiratory geometry. Generation of the
unstructured hexahedral mesh provided a significant time savings in pre-processing with an
associated minimal increase in computational run time. In contrast, a hybrid mesh configuration of
tetrahedral cells surrounded by multiple layers of near-wall hexahedral elements resulted in
significantly higher grid convergence values and different velocity and particle deposition
results. These findings emphasize the importance of aligning control volume gridlines with the
predominate direction of flow and using higher order elements in biofluid applications with long and
thin conduits. Future work is needed to better assess modified flux interpolation schemes, other
hybrid configurations and the use of polyhedral elements. For further discussion, please refer to
[Samir Vinchurkar, P. Worth Longest]534.
534 Samir Vinchurkar, P. Worth Longest, “Evaluation of hexahedral, prismatic and hybrid mesh styles for
simulating respiratory aerosol dynamics”, Computers & Fluids, 2008.
535 Philippe Martineau Rousseau, Azzeddine Soulaïmani and Michel Sabourin, “Comparison between structured
hexahedral and hybrid tetrahedral meshes generated by commercial software for CFD hydraulic turbine analysis”,
Conference Paper, May 2013.
536 ANSYS ICEM CFD 13.0. Available from: http://www.ansys.com/Products/Other+Products/ANSYS+ICEM.
537 Pointwise 17.0 R1. Available from: http://www.pointwise.com/.
head loss and the meridian velocity at the symmetrical plane are used to show similarities between
the two meshing methodologies. An investigation of the small differences between the results is
made, utilizing the velocity and total pressure contours. These analyses indicate that these two
meshing methodologies achieve equivalent results for both spiral case geometries.
Figure 13.13 Boundary Layer Transition Between Prismatic and Volume Elements – (Courtesy of
Rousseau et al.)
13.5.2 Geometry
A turbine spiral case is the component before the runner of a Francis hydraulic turbine. More
specifically, it begins after the penstock and ends at the runner's entrance, and it also includes the
stay vanes and the wicket gates. Figure 13.14 shows a half domain used for the calculation of a spiral
case. The primary function of the spiral casing is to rotate the flow and distribute it equally to the
runner [4]. The stay vanes and mostly the wicket gates induce a direction to the flow at the entrance
of the runner. Ideally these functions are carried out with minimum head loss and evenly within each
stay vane and wicket gate channel. For example, the effect of recirculation zones at the stay vanes
could affect the distribution and direction of the flow at the runner. The behavior of the runner as
well as of the draft tube could thus be affected.
Figure 13.14 Example of a Hydraulic Turbine Spiral Case (half domain) – (Courtesy of Rousseau et al.)
Figure 13.15 Geometry of the Stay Vanes and Wicket Gates, Left: Geometry A, Right: Geometry B –
(Courtesy of Rousseau et al.)
In this paper, we consider two spiral case geometries
of the same hydraulic turbine and these are scaled at small model dimensions. Each spiral case has
24 stay vanes and wicket gates. Five different models of stay vanes are utilized in each spiral case.
These models are characterized by their different chord lengths, which decrease from the entrance to
the end of the spiral case. The wicket gates are identical within the same spiral case.
The difference between the two spiral cases lies with the stay vanes and the wicket gates. In the
geometry B, the stay vane leading edge is rounded and its incidence angle is better aligned with the flow.
The trailing edges of the stay vanes and wicket gates are tapered. Figure 13.15 shows the
differences between the two geometries with the same model of stay vanes and wicket gates. These
changes improve the flow in the geometry B by reducing the separation on the upper surface of the
stay vanes. This largely eliminates the recirculation zones. Replacement of the chamfer by a rounded
leading edge on the stay vanes and refinement of the trailing edge of the wicket gates also decrease
wakes in the flow. The flow is generally more uniform in the geometry B. The total head loss through
the spiral case is also greatly reduced. The complexity of the flow and the different models of stay
vane in the spiral case prevent any periodic simplification of the computational domain. However, it
is simplified symmetrically in the horizontal plane. The absence of the runner after the exit of the
spiral case allows this simplification. The computation domain of the geometry A is shown in Figure
13.14. The exit of the spiral case is far from the trailing edge of the wicket gates to reduce the effect
of the outflow condition on the flow.
Table 13.8 Mesh Densities for the Structured Hexahedral and Hybrid Unstructured Tetrahedral Meshes –
(Courtesy of Rousseau et al.)
538 Philippe Martineau Rousseau, Azzeddine Soulaïmani and Michel Sabourin, “Comparison between structured
hexahedral and hybrid tetrahedral meshes generated by commercial software for CFD hydraulic turbine analysis”,
Conference Paper, May 2013.
539 See Previous.
Figure 13.17 Hybrid Tetrahedral Medium Mesh on the Symmetric Surface of the Geometry A (left) &
Mesh in the wake of a Hydraulic Profile (wicket gates trailing edge)(right) – (Courtesy of Rousseau et al.)
540 Shur, M., et al., Comparative Numerical Testing of One- and Two-Equation Turbulence Models for Flows with
Separation and Reattachment, 33rd Aerospace Sciences Meeting and Exhibit, American Institute of Aeronautics
and Astronautics: Reno, NV, 1995.
541 Bardina, J.E., P.G. Huang, and T.J. Coakley, Turbulence Modeling Validation, Testing, and Development, NASA.
542 Munson, B.R., et al., Fundamentals of Fluid Mechanics, Sixth Edition, Hoboken, NJ: Wiley, 2009, p. 724.
13.5.5 Results
Analysis of the total head loss as a function of the radius provides details about the loss through the
spiral casing components, which in turn allows identification of the component that causes the
largest total head loss. This information is used to amend the problematic component, for example,
the stay vanes in the geometry A. Ultimately, the analysis helps in calculating the hydraulic efficiency
of the turbine. The meridian velocity on the symmetrical plane is also used to compare the two
meshes. It provides qualitative information on the flow. For example, it shows the effect of
recirculation zones or of an obtuse trailing edge. It also indicates if the hybrid tetrahedral mesh
overestimates the dissipation of the wake. The meridian velocity is measured at 10% upstream of
the inlet radius of the stay vanes, between the end of the stay vanes and the beginning of the guide
vanes, and at the average radius of the runner inlet.
The difference of the total cumulative loss between the two types of mesh is approximately 10% and
occurs predominantly near the trailing edge of the stay vanes and upstream of the leading edge of
the guide vanes. Figure 13.18 confirms that the total head loss difference originates at the end of the
stay vanes. In fact, the hybrid tetrahedral mesh models a larger recirculation zone and thus a larger
wake. The better quality of the prism elements of the tetrahedral mesh on the wall could be the cause
for that larger wake. Furthermore, Figure 13.18 shows an effect of the second order advection
scheme (blend factor = 1) used in the geometry A by the non-physical total pressure augmentation
(red contour plot). This second order scheme could lead to local instabilities in cases of sudden flow
direction change or coarse meshes. The adaptive CFX advection scheme (high resolution) should
eliminate these instabilities but will induce more numerical diffusion in the presence of large
recirculation zones.
Figure 13.18 Relative Total Head Loss on the Meridian Plane for the Geometry A with fine mesh, left:
Structured Hexahedral, right: Hybrid Tetrahedral – (Courtesy of Rousseau et al.)
Figure 13.19 shows the meridian velocity profiles for the geometry A with medium and fine meshes.
For both grids, the meridian velocity is almost identical at the entrance of the stay vanes. The slight
difference is due to the azimuthal change in the distribution of the flow. It should be noted that the
sudden jump in the meridian velocity corresponds to the end of the spiral case at an azimuthal
position of 40°. The wave pattern in the velocity profile shows the influence of the leading edge of
the stay vanes. The velocity profile in the gap between the stay vanes and the wicket gates is very
similar for all meshes. However, there remains a slight difference caused by the larger recirculation
zone in the hybrid tetrahedral mesh. The velocity profile at the entrance of the runner differs in its
faster dissipation of the wake and in a gap in the velocity profile. The former tends to smooth the
velocity profile, and the latter results from the difference between the recirculation zones of the two
meshes. As noted on the velocity profile at the
entrance of the stay vanes, the recirculation zone differences slightly change the flow distribution in
the spiral case. These differences between the two meshes are slightly more pronounced with
increased refinement. In fact, the recirculation zone is larger with the hybrid mesh. In contrast to the
geometry A, it appears that the evolution of the total head loss is very similar for both types of
meshes. Only the coarse hybrid tetrahedral mesh differs in the total head loss to the end of the stay
vanes and wicket gates. The too-rapid growth of the tetrahedral mesh at the trailing edge explains
this difference.
Figure 13.19 Meridian Velocity Near a Stay Vane with fine mesh for Geometry A, left: Structured
Hexahedral, right: Hybrid Tetrahedral – (Courtesy of Rousseau et al.)
For the geometry B, only medium meshes are compared due to the strong similarity of their assessment
of the total head loss. As expected, the meridian velocity at the entrance of the stay vanes is almost
identical for the two meshes (see [Rousseau et al.]543). Similarly, the velocity is almost identical
between the stay vanes and the wicket gates. The meridian velocity at the runner entrance shows
that the hybrid tetrahedral mesh dissipates the wake faster. In fact, the extreme values caused by the
wicket gate's wake are smoothed out by this mesh. Figure 13.20 shows the overall effect of the
dissipation of the wake caused by the too-rapid growth of the tetrahedral mesh. However, a strong
similarity of the flow is observed between the two mesh types.
Figure 13.20 Meridian Velocity on the Meridian Plane for the Geometry B – (Courtesy of Rousseau et al.)
13.5.6 Conclusion
The comparison between the structured hexahedral and hybrid tetrahedral meshes in the complex
geometry of a hydraulic turbine spiral case gives an advantage to the latter. In fact, Pointwise
software eliminates many defects inherent to hybrid tetrahedral meshes, such as inadequate definition
of hydraulic profiles and poor transition between prismatic and volume elements. This mesh also
leads to higher quality elements near the walls. Furthermore, a significant savings in turnaround time
is obtained for the mesh construction compared to the hexahedral mesh. Typically, the mesh design
time is reduced by a factor of five to ten with the construction of a hybrid tetrahedral mesh. The
results show a great similarity of the flow for the two meshes in the two geometries. However, the
flow in the geometry A differs in the recirculation zones in the upper surface of the stay vanes. The
hybrid tetrahedral mesh models a larger recirculation zone than that of the structured hexahedral.
543Philippe Martineau Rousseau, Azzeddine Soulaïmani and Michel Sabourin, “Comparison between structured
hexahedral and hybrid tetrahedral meshes generated by commercial software for CFD hydraulic turbine analysis”,
Conference Paper, May 2013.
The higher quality of the prismatic elements on the wall could be one cause. This difference modifies
the evolution of the total head loss and slightly alters the meridian velocity profile. In addition to
these differences, there is also a slightly faster dissipation of the wake downstream of hydraulic
profiles in the hybrid tetrahedral mesh. The too-rapid growth of tetrahedral elements is the main
cause. However, this disadvantage could be reduced with a finer mesh (and associated computational
resources). In the absence of detailed experimental results about the flow in the spiral case, it is not
possible to conclude on the accuracy of each mesh.
In regard to the geometry B, the flow and the evolution of losses are virtually identical. As in the
geometry A, there is only a slightly larger dissipation of the wakes due to the rapid expansion of the
size of tetrahedral mesh elements at the trailing edge. Finally, the application of two types of mesh in
both geometries shows similar results in terms of hydraulic performance. However, it is interesting
to note that the evolution of the total head loss as a function of the radius in the spiral case shows that the
hexahedral mesh requires fewer nodes than the tetrahedral hybrid mesh to achieve mesh
independence. This can be an important factor when the available computing power is limited or the
number of licenses for commercial software becomes an issue. The hybrid tetrahedral meshing
approach is currently being applied to the calculation of the entire flooded parts of a Francis
hydraulic turbine. That study should demonstrate the validity of this meshing method against
experimental results.
Each technique has its own unique characteristics. For example, Direct Differentiation, used
here, has the advantage of being exact, since the governing equations are differentiated directly with
respect to the design parameters, but it is limited in scope. By far the most widely used sensitivity
analysis approach is the adjoint variable technique, especially for aerodynamic optimization. Given
this popularity, we consider it in more detail.
and is specific for each design variable (DV). It is obvious that the cost quickly becomes
unmanageable545.
𝐑( 𝐗 (𝐏), 𝐏) = 0
Eq. 14.1
where the explicit dependency of R on the grid X and the vector of design parameters P is evident. The
parameters P control the grid X. Using the chain rule of differentiation,
\[ \left[\frac{\partial \mathbf{R}}{\partial \mathbf{X}}\right]\left[\frac{\partial \mathbf{X}}{\partial \mathbf{P}}\right] + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{P}}\right] = 0 \]
Eq. 14.2
Further simplification could include the vector of grid sensitivity which is
\[ \frac{\partial \mathbf{X}}{\partial \mathbf{P}} = \frac{\partial \mathbf{X}}{\partial \mathbf{X}_B}\,\frac{\partial \mathbf{X}_B}{\partial \mathbf{P}} \]
Eq. 14.3
Where XB denotes the boundary nodes546.
\[ \mathbf{X}(r) = \sum_{i=0}^{n} R_{i,p}(r)\,\mathbf{D}_i , \qquad R_{i,p}(r) = \frac{N_{i,p}(r)\,\omega_i}{\sum_{i=0}^{n} N_{i,p}(r)\,\omega_i} , \qquad i = 0,\ldots,n \]
Eq. 14.4
where X(r) is the vector of surface coordinates in the r-direction, Di are the control points (forming a
control polygon), ωi are the weights, Ni,p(r) are the p-th degree B-Spline basis functions, and Ri,p(r) are
the corresponding rational basis functions.
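For readers who want to experiment with Eq. 14.4, the following minimal Python sketch evaluates the rational basis functions Ri,p(r) via the Cox–de Boor recursion and sums them against a set of control points. The control polygon, weights, and knot vector below are hypothetical placeholders, not the NACA 0012 or RAE 2822 parameterizations of Figure 14.1.

```python
import numpy as np

def bspline_basis(i, p, r, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= r < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] > knots[i]:
        left = (r - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, r, knots)
    right = 0.0
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - r) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, r, knots)
    return left + right

def nurbs_point(r, ctrl_pts, weights, p, knots):
    """Evaluate X(r) = sum_i R_{i,p}(r) D_i with the rational basis of Eq. 14.4."""
    N = np.array([bspline_basis(i, p, r, knots) for i in range(len(ctrl_pts))])
    R = N * weights / np.sum(N * weights)   # rational basis functions R_{i,p}(r)
    return R @ ctrl_pts

# Hypothetical cubic curve with six control points and a clamped knot vector
ctrl_pts = np.array([[1.0, 0.0], [0.7, 0.08], [0.3, 0.12],
                     [0.3, -0.12], [0.7, -0.08], [1.0, 0.0]])
weights = np.ones(6)
knots = np.array([0, 0, 0, 0, 1/3, 2/3, 1, 1, 1, 1], dtype=float)
print(nurbs_point(0.5, ctrl_pts, weights, p=3, knots=knots))
```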
545 Gabriele Luigi Mura, “Mesh Sensitivity Investigation in the Discrete Adjoint Framework”, Thesis submitted to
University of Sheffield in partial fulfilment of the requirement for the degree of Doctor of Philosophy, 2017.
546 Sadrehaghighi, I., Smith, R.E., Tiwari, S., N., “Grid Sensitivity and Aerodynamic Optimization Of Generic Airfoils”,
Figure 14.1 B-Spline Approximation of NACA0012 (left) and RAE2822 (right) Airfoils
Gaetan K.W. Kenway, Joaquim R. R. A. Martins, and Graeme J. Kennedy, "Aerostructural optimization of the
548
Therefore, it can be solved safely using techniques such as Gaussian elimination without pivoting.
The procedure can be easily extended to cross-sectional configurations, when critical cross-sections
are denoted by several circular conic sections, and the intermediate surfaces have been generated
using linear interpolation. Increasing the weights would deform the circular segments to other conic
segments (elliptic, parabolic, etc.) as desired for different flight regions. In this manner, the number
of design parameters can be kept to a minimum, which is an important factor in reducing costs. An
efficient gradient-based algorithm for aerodynamic shape optimization is presented by [Hicken and
Zingg]549 where to integrate geometry parameterization and mesh movement. The generalized B-
spline volumes are used to parameterize both the surface and volume mesh. Volume mesh of B-spline
control points mimics a coarse mesh where a linear elasticity mesh-movement algorithm is applied
directly to this coarse mesh and the fine mesh is regenerated algebraically. Using this approach,
mesh-movement time is reduced by two to three orders of magnitude relative to a node-based
movement.
Figure 14.3 Free Form Deformation (FFD) for Volume Grid with Control Points (Courtesy of
Kenway et al.)
14.2.1.1 Case Study - 2D Study of Airfoil Grid Sensitivity via Direct Differentiation (DD)550
The structured grid sensitivity of a generic airfoil with respect to design parameters using the NURBS
parameterization is discussed. The geometry, as shown in Figure 14.2, has six pre-specified control
points. The control points are numbered counter-clockwise, starting and ending with control points
(0 and 5), assigned to the tail of the airfoil. A total of 18 design parameters (i.e., three design
parameters per control point) are available for optimization purposes. Depending on desired accuracy
and degree of freedom for optimization, the number of design parameters could be reduced for each
particular problem. For the present case, such reduction is achieved by considering fixed weights
and chord-length. Out of the remaining four control points with two degrees of freedom for each,
549 Jason E. Hicken and David W. Zingg, “Aerodynamic Optimization Algorithm with Integrated Geometry
Parameterization and Mesh Movement”, AIAA Journal Vol. 48, No. 2, February 2010.
550 Sadrehaghighi, I., Smith, R.E., Tiwari, S., N., “Grid Sensitivity and Aerodynamic Optimization Of Generic Airfoils”,
control points 1 and 5 have been chosen as a case study. The number of design parameters is now
reduced to four with XD = {X1, Y1, X5, Y5}T, with initial values specified in Figure 14.2. Non-zero
contributions to the surface grid sensitivity coefficients of these control points are the basis functions
R1,3(r) and R5,3(r). The sensitivity gradients are restricted only to the region influenced by the
selected control point.
This locality feature of the NURBS parameterization makes it a desirable tool for complex design and
optimization when only a local perturbation of the geometry is warranted. Similar results can be
obtained for design control point 5, where the sensitivity gradients are restricted to the lower portion
of the domain. Figure 14.4 shows a C-type dual-block structured grid and its sensitivity with respect
to NURBS input for different design control points.
551 Jiaqi Luo, Feng Liu, “Performance Impact of Manufacturing Tolerances for a Turbine Blade Using Second
Order Sensitivities”, Proceedings of ASME Turbo Expo 2018: Turbomachinery Technical Conference and
Exposition, GT2018, June 11-15, 2018, Oslo, Norway.
\[ \frac{\delta \mathbf{R}}{\delta x_i} = \frac{\partial \mathbf{R}}{\partial \mathbf{w}}\,\frac{\delta \mathbf{w}}{\delta x_i} + \frac{\partial \mathbf{R}}{\partial x_i} = 0 \]
Eq. 14.7
By introducing a series of co-state variables Ψ, and subtracting the product of ΨT and Eq. 14.7 from
Eq. 14.6, we can get
\[ \frac{\delta I}{\delta x_i} = \underbrace{\left\{\frac{\partial I}{\partial \mathbf{w}} - \psi^T \frac{\partial \mathbf{R}}{\partial \mathbf{w}}\right\}}_{0} \frac{\delta \mathbf{w}}{\delta x_i} + \left\{\frac{\partial I}{\partial x_i} - \psi^T \frac{\partial \mathbf{R}}{\partial x_i}\right\} \]
Eq. 14.8
The crucial issue of the adjoint method is to eliminate the effects of δw on δI to avoid the calculation
of δw due to the change of aerodynamic shape. It can be achieved if the adjoint operator Ψ satisfies
the adjoint equations, therefore the first term in Eq. 14.8 is set to zero. Then the sensitivities can be
determined by
\[ \frac{\delta I}{\delta x_i} = \frac{\partial I}{\partial x_i} - \psi^T \frac{\partial \mathbf{R}}{\partial x_i} \]
Eq. 14.9
Once the flow solutions and adjoint solutions are obtained by solving the governing flow equations
and the adjoint equations, respectively, the complete sensitivities can be calculated by deforming the
aerodynamic shape and thus the grid for each geometric parameter. Considering that the evaluation
of an aerodynamic objective function involves perturbing the grid, it is to be expected that the
sensitivities of the objective function to the design variables will in some way involve sensitivities of
the grid perturbation algorithm. When evaluating the gradient using the discrete adjoint method,
these mesh sensitivities are implicitly included in the terms ∂I/∂xi and ∂R/∂xi.552
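The mechanics of Eq. 14.7 through Eq. 14.9 can be illustrated on a toy linear "flow" problem: a single adjoint solve reproduces the sensitivities obtained by finite-differencing the state for every design variable. The matrices, the objective, and the design variables below are arbitrary stand-ins assumed only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_dv = 5, 3
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # stand-in for dR/dw (flow Jacobian)
B = rng.standard_normal((n, n_dv))                   # stand-in for -dR/dx
c = rng.standard_normal(n)                           # objective weights: I(w) = c . w

def solve_state(x):
    """Toy 'flow' problem: R(w, x) = A w - B x = 0."""
    return np.linalg.solve(A, B @ x)

x0 = rng.standard_normal(n_dv)

# Adjoint route (Eq. 14.8/14.9): one transposed solve, independent of the number of DVs
psi = np.linalg.solve(A.T, c)        # (dR/dw)^T psi = (dI/dw)^T
dI_dx_adjoint = psi @ B              # dI/dx_i = dI/dx_i|explicit - psi^T dR/dx_i, with dR/dx = -B

# Brute-force route: one extra state solve per design variable
eps = 1e-6
dI_dx_fd = np.array([
    (c @ solve_state(x0 + eps * np.eye(n_dv)[:, j]) -
     c @ solve_state(x0 - eps * np.eye(n_dv)[:, j])) / (2.0 * eps)
    for j in range(n_dv)
])
print(np.allclose(dI_dx_adjoint, dI_dx_fd))
```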
Table 14.1 Pros & Cons of Different Grid Sensitivity Methods (NDV = Number of Design Variables)
552Chad Oldfield, “An Adjoint Method Augmented with Grid Sensitivities for Aerodynamic Optimization”, A thesis
submitted in conformity with the requirements for the degree of M.A.Sc. Graduate Department of Aerospace
Engineering University of Toronto, 2006.
Table 14.1 shows the pros and cons of the different mesh sensitivity routines as envisioned by [Gabriele Luigi Mura]553.
14.3.1 Case Study - Using An Adjoint Approach to Eliminate Mesh Sensitivities in Computational
Design
Authors : Eric J. Nielsen and Michael A. Park
Title : Using An Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design
Source : https://ntrs.nasa.gov/search.jsp?R=20080015430
An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design
framework is introduced. The method is based on an adjoint approach and eliminates the need for
explicit linearization of the mesh movement scheme with respect to the geometric parameterization
variables, an expense that has hindered practical large-scale design optimization using discrete
adjoint methods [Nielsen and Park]554. The effects of the mesh sensitivities can be accounted for
through the solution of an adjoint problem equivalent in cost to a single mesh movement
computation, followed by an explicit matrix–vector product scaling with the number of design
variables and the resolution of the parameterized surface grid. The accuracy of the implementation
is established and dramatic computational savings obtained using the new approach are
demonstrated using several test cases. Sample design optimizations are also shown.
14.3.1.1 Introduction
In recent years a concerted effort has been made to bring higher fidelity, physics-based
computational fluid dynamics (CFD) simulations into the aircraft design process. Such tools have
typically been used to perform validations of designs derived through the use of lower fidelity
methods, or in some cases, used in heuristic design methods that rely heavily on experience.
However, these advanced CFD tools are now routinely targeted as primary components of automated
optimization frameworks.
In the field of gradient-based design, one challenge in utilizing solvers based on the Euler or
Reynolds-averaged Navier-Stokes equations has been an efficient and accurate method for
computing the sensitivity derivatives required by many optimization algorithms. Approaches such
as finite differencing [1-3], direct differentiation,[4-10] and the complex variable method [11-16] can
be used for calculating these derivatives; however, their cost scales directly with the number of
design variables. For typical aerodynamic design problems where this value may be on the order of
tens to hundreds, this limitation precludes the use of such methods.
To alleviate the computational burden associated with problems containing many design variables,
recent work has focused on the use of adjoint methods [17-35]. Adjoint methods may be
implemented in either a continuous or discrete context, depending on the order in which the
differentiation and discretization operations are performed. The relative merits of the two
approaches are the subject of much debate in the literature; one such difference is the focus of the
current work.
Both the continuous and discrete adjoint approaches introduce an auxiliary, or adjoint, variable that
is determined through the solution of an additional linear system of equations. Using the same
solution procedure as is used for the governing equations, asymptotic convergence rates of both
systems are similar, as the eigenvalues of the two systems are the same. Moreover, for the discrete
adjoint variant, if the solution algorithm itself is constructed in a manner which is discretely adjoint
to the baseline scheme, the asymptotic convergence rates are guaranteed to be identical [19,20,31].
553 Gabriele Luigi Mura, “ Mesh Sensitivity Investigation in the Discrete Adjoint Framework”, Thesis is submitted
to University of Sheffield in partial fulfilment of the requirement for the degree of Doctor of Philosophy, 2017.
554 Eric J. Nielsen and Michael A. Park, “Using An Adjoint Approach to Eliminate Mesh Sensitivities in
The primary advantage of the adjoint approach is that the expense does not scale with the number of
design variables. Indeed, the solution of the adjoint system itself is independent of the number of
design variables. In practice, however, the discrete approach dictates that the effects of the mesh be
accounted for in the sensitivity derivatives, whereas these terms do not appear in the derivation of
the continuous operators. While perhaps counterintuitive, this difference is actually viewed as one
of the primary advantages of the discrete approach where, by definition, the sensitivity analysis is
discretely consistent with the flow simulation. If these grid-related contributions are not included, it
has been shown that the resulting sensitivities can be extremely inaccurate and even of the incorrect
sign.[29,30]. The continuous approach should converge to the discrete result in the case of a
sufficiently refined grid, but this condition is seldom, if ever, met in current practice for realistic
three-dimensional complex geometry computations.
Unfortunately, the expense of computing these grid-related terms has hindered large-scale
application of the discrete approach. For a single design variable, the use of a direct mode of
differentiation to obtain the derivative of a mesh movement scheme based on linear elasticity was
shown in Ref. 30 to cost as much as 30% of a flow simulation.
Clearly, the expense associated with performing this operation for several dozen design variables
will dominate the overall computation.
In the current work, an adjoint formulation is introduced that obviates the need for an explicit
computation of the grid sensitivities. Such an approach has previously been used in the automatic
differentiation (AD) community [36], however, no explicit formulations for hand coding such a
technique have been found in the literature. The scheme presented in the current work significantly
reduces the expense associated with the mesh linearization. Demonstrations of the method show that
a rigorous discrete sensitivity analysis for problems based on the Navier-Stokes equations may
ultimately be performed at a cost comparable to that of the analysis problem.
14.3.1.2 Mesh Sensitivities via Forward Mode Differentiation
The discrete adjoint technique for sensitivity analysis can be derived in several ways. Here, the
approach taken in Ref. 28 is used. Consider the vector of discretized residual equations R for the Euler
or Navier-Stokes equations as a function of the design variables D, computational mesh X, and flow
field variables Q. Given a steady-state solution of the form R (D, Q, X) = 0 , a Lagrangian function L
can be defined as
L(𝐃, 𝐐, 𝐗, Λ) = f(𝐃, 𝐐, 𝐗) + ΛT 𝐑(𝐃, 𝐐, 𝐗)
Eq. 14.10
where f (D, Q, X) represents a cost function to be minimized and Λ represents a vector of Lagrange
multipliers, or adjoint variables. Differentiating this expression with respect to D yields the following:
\[ \frac{dL}{d\mathbf{D}} = \left\{\frac{\partial f}{\partial \mathbf{D}} + \left[\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right]^T \frac{\partial f}{\partial \mathbf{X}}\right\} + \left[\frac{\partial \mathbf{Q}}{\partial \mathbf{D}}\right]^T \underbrace{\left\{\frac{\partial f}{\partial \mathbf{Q}} + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{Q}}\right]^T \Lambda\right\}}_{0} + \left\{\left[\frac{\partial \mathbf{R}}{\partial \mathbf{D}}\right]^T + \left[\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right]^T \left[\frac{\partial \mathbf{R}}{\partial \mathbf{X}}\right]^T\right\} \Lambda \]
Eq. 14.11
Since the vector of adjoint variables is essentially arbitrary, the coefficient multiplying ∂Q/∂D can
be eliminated using the following equation
\[ -\frac{\partial f}{\partial \mathbf{Q}} = \left[\frac{\partial \mathbf{R}}{\partial \mathbf{Q}}\right]^T \Lambda \]
Eq. 14.12
The adjoint equation given in Eq. 14.12 represents a linear set of equations for the costate variables.
Although this system can be solved directly using GMRES, a time-like derivative is added and the
solution is obtained by marching in time, much like the flow solver:
\[ \left\{\frac{V}{\Delta t}\mathbf{I} + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{Q}}\right]^T\right\} \Delta^n \Lambda = -\frac{\partial f}{\partial \mathbf{Q}} - \left[\frac{\partial \mathbf{R}}{\partial \mathbf{Q}}\right]^T \Lambda^n \quad \text{where} \quad \Lambda^{n+1} = \Lambda^n + \Delta^n \Lambda \]
Eq. 14.13
The time term can be used to increase the diagonal dominance for cases in which GMRES alone would
tend to stall. This ultimately results in a more robust adjoint solver. Due to the large amount of code
resulting from the linearization of the viscous terms and the turbulence model, these contributions
are stored in the present implementation. Because the stencil for the inviscid contributions is larger,
the linearization of these terms is recomputed at each step to avoid the need for extra storage and
data structure.
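A minimal sketch of the pseudo-time relaxation of Eq. 14.13 is given below, assuming a small, diagonally augmented random stand-in for ∂R/∂Q so that the iteration converges. It is not the authors' solver, only an illustration of how the V/Δt term drives Λ toward the solution of Eq. 14.12.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
dRdQ = 0.2 * rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # stand-in for dR/dQ
dfdQ = rng.standard_normal(n)                                 # stand-in for df/dQ
V_over_dt = 2.0                                               # pseudo-time term V/dt

lam = np.zeros(n)
for it in range(500):
    rhs = -dfdQ - dRdQ.T @ lam                                # adjoint residual
    if np.linalg.norm(rhs) < 1e-10:
        break
    dlam = np.linalg.solve(V_over_dt * np.eye(n) + dRdQ.T, rhs)
    lam += dlam

# The converged Lambda satisfies [dR/dQ]^T Lambda = -df/dQ (Eq. 14.12)
print(it, np.allclose(dRdQ.T @ lam, -dfdQ, atol=1e-8))
```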
The solution of this linear system of equations for three-dimensional turbulent flows on unstructured
grids has been demonstrated previously in [28-31]. Once the solution for Λ has been formed, the
remaining terms in Eq. 14.11 can be evaluated to give the desired sensitivity vector
\[ \frac{dL}{d\mathbf{D}} = \left\{\frac{\partial f}{\partial \mathbf{D}} + \left[\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right]^T \frac{\partial f}{\partial \mathbf{X}}\right\} + \left\{\left[\frac{\partial \mathbf{R}}{\partial \mathbf{D}}\right]^T + \left[\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right]^T \left[\frac{\partial \mathbf{R}}{\partial \mathbf{X}}\right]^T\right\} \Lambda \]
Eq. 14.14
The ∂X/∂D terms in Eq. 14.14 represent the mesh sensitivities. In Ref. 30, a mesh movement
strategy based on the equations of linear elasticity is described. Although generally not as expensive
as a flow field computation, the cost associated with solving these equations is not trivial. If the
system is posed as
𝐊𝐗 = 𝐗 surface
Eq. 14.15
then the mesh sensitivities may be computed from the following:
\[ \mathbf{K}\,\frac{\partial \mathbf{X}}{\partial \mathbf{D}} = \left(\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right)_{\text{surface}} \]
Eq. 14.16
Note that the solution of this linear system is equivalent in cost to that of the mesh movement (Eq.
14.15), and must be obtained once for each design variable in D .
\[ \frac{dL}{d\mathbf{D}} = \frac{\partial f}{\partial \mathbf{D}} + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{D}}\right]^T \Lambda_f + \left[\frac{\partial \mathbf{Q}}{\partial \mathbf{D}}\right]^T \underbrace{\left\{\frac{\partial f}{\partial \mathbf{Q}} + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{Q}}\right]^T \Lambda_f\right\}}_{0} + \left[\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right]^T \underbrace{\left\{\frac{\partial f}{\partial \mathbf{X}} + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{X}}\right]^T \Lambda_f + \mathbf{K}^T \Lambda_g\right\}}_{0} - \Lambda_g^T \left[\frac{\partial \mathbf{X}}{\partial \mathbf{D}}\right]_{\text{surface}} \]
Eq. 14.18
As before, the coefficient multiplying [∂Q/∂D]T can be eliminated by satisfying the adjoint equation
for the flow simulation, Eq. 14.12. In a similar fashion, the term multiplying [∂X/∂D]T can also be
eliminated by satisfying a second adjoint problem:
\[ \mathbf{K}\,\Lambda_g = -\left\{\frac{\partial f}{\partial \mathbf{X}} + \left[\frac{\partial \mathbf{R}}{\partial \mathbf{X}}\right]^T \Lambda_f\right\} \]
Eq. 14.19
With the solution of Eq. 14.12 and Eq. 14.19, the final form of the sensitivity vector becomes
dL ∂f ∂𝐑 ∂𝐗
= + ΛTf [ ] − ΛTg [ ]
d𝐃 ∂𝐃 ∂𝐐 ∂𝐃 surface
Eq. 14.20
With the formulation outlined above, a single solution of the linear system given by Eq. 14.19 is
required for each function f. If f is some quantity composed of aerodynamic coefficients such as lift,
drag, or moments, several observations can also be made about Eq. 14.20. For design problems in
which the shape is held constant and only global parameters such as the angle of attack or Mach
number are allowed to vary, Eq. 14.19 need not be evaluated, and the third term in Eq. 14.20 is
identically zero. Conversely, for problems involving solely geometric parameterization variables, the
first and second terms in Eq. 14.20 are identically zero, as there is no explicit dependence of f or R
on D. In addition, the term ΛgT[∂X/∂D]surface is very cheap to compute and only requires an explicit
inner product dimensioned by the size of the surface mesh for each design variable.
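The cost argument above can be made concrete with a small linear-algebra sketch: the forward mode solves the mesh-movement system once per design variable, while the adjoint mode needs a single transposed solve followed by inexpensive inner products with the surface-grid sensitivities. The stiffness matrix, the vector standing in for ∂f/∂X + [∂R/∂X]^T Λf, and the surface sensitivities below are synthetic assumptions following the sign conventions of Eq. 14.16, Eq. 14.19, and Eq. 14.20.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vol, n_surf, n_dv = 200, 30, 8

# Toy "mesh movement" stiffness operator K and analytic surface-grid
# sensitivities (dX/dD)_surface for each design variable.
M = rng.standard_normal((n_vol, n_vol))
K = M @ M.T + n_vol * np.eye(n_vol)
dXs_dD = np.zeros((n_vol, n_dv))
dXs_dD[:n_surf, :] = rng.standard_normal((n_surf, n_dv))   # only surface nodes move directly

g = rng.standard_normal(n_vol)   # stands in for df/dX + (dR/dX)^T Lambda_f

# Forward (direct) mode: one mesh-movement-sized solve per design variable
dX_dD = np.linalg.solve(K, dXs_dD)          # K dX/dD = (dX/dD)_surface, n_dv right-hand sides
grid_term_direct = dX_dD.T @ g

# Adjoint mode: a single transposed solve, then cheap inner products
lam_g = np.linalg.solve(K.T, -g)            # K^T Lambda_g = -g
grid_term_adjoint = -(dXs_dD.T @ lam_g)

print(np.allclose(grid_term_direct, grid_term_adjoint))
```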
14.3.1.3.1 Implementation
The software developed in [28–30] has been extended to include the formulation outlined above. In
the previous implementation, the matrix–vector product ΛfT[∂R/∂X] has been stored to enable rapid
computation of each inner product with ∂X/∂D, as these residual linearizations are constant for all
shape parameters in D. These mechanics are now used to construct the right-hand side of Eq. 14.19
in the current work.
The parameterization schemes employed here are described in [37 and 38], and rely on a free-form
deformation technique to provide a compact set of design variables for a wide range of
configurations. Given the current vector of design variables D, the methods are used to determine the
current location of the surface grid points as well as their analytic derivatives with respect to D,
[∂X/∂D]surface . The schemes are very inexpensive and can be used to consistently parameterize
families of computational meshes suitable for multiple disciplines.
The matrix K is formed using the method of [30], and is merely transposed once all of the
contributions have been included. The system given by Eq. 14.19 is then solved using the
Generalized Minimal Residual (GMRES) algorithm implemented in [30]. Since the eigenvalues of K
remain unchanged by the transpose operation, the solution for Λg converges similarly to that of the
mesh movement scheme. In the current implementation, adjoint solutions for multiple functions f
may be computed simultaneously as outlined in [31] by storing multiple right hand sides for Eq.
14.19. Once the solution for Λg has been computed, an explicit inner product with the surface mesh
sensitivities for each design variable yields the final sensitivity vector.
Table 14.3 Comparison of sensitivity derivatives for lift and drag coefficients using various approaches
14.3.5 References
1 Hicks, R.M., and Henne, P.A., "Wing Design by Numerical Optimization," Journal of Aircraft, Vol. 15,
1978, pp. 407-412.
2 Joh, C.-Y., Grossman, B., and Haftka, R.T., "Design Optimization of Transonic Airfoils," Engineering
Optimization, Vol. 21, 1993, pp. 1-20.
3 Vanderplaats, G.N., Hicks, R.N., and Murman, E.M., "Application of Numerical Optimization
Techniques to Airfoil Design," NASA Conference on Aerodynamic Analysis Requiring Advanced
Computers, NASA SP-347, Part II, March 1975.
4 Baysal, O., and Eleshaky, M.E., “Aerodynamic Sensitivity Analysis Methods for the Compressible Euler
Equations,” Journal of Fluids Engineering, Vol. 113, 1991, pp. 681-688.
5 Borggaard, J.T., Burns, J., Cliff, E.M., and Gunzburger, M.D., “Sensitivity Calculations for a 2-D Inviscid
Supersonic Forebody Problem,” Identification and Control Systems Governed by Partial Differential
Equations, SIAM Publications, Philadelphia, 1993, pp. 14-24.
6 Burgreen, G.W., and Baysal, O., “Aerodynamic Shape Optimization Using Preconditioned Conjugate
Gradient Methods,” AIAA Paper 93-3322, 1993.
7 Hou, G. J.-W., Maroju, V., Taylor, A.C., and Korivi, V.M., “Transonic Turbulent Airfoil Design
Optimization with Automatic Differentiation in Incremental Iterative Forms,” AIAA 95-1692, 1995.
8 Newman, J.C., and Taylor, A.C., “Three-Dimensional Aerodynamic Shape Sensitivity Analysis and
Design Optimization Using the Euler Equations on Unstructured Grids,” AIAA 96-2464, 1996.
9 Sherman, L.L., Taylor, A.C., Green, L.L., Newman, P.A., Hou, G.J.-W., and Korivi, V.M., “First- and
Second-Order Aerodynamic Sensitivity Derivatives via Automatic Differentiation with Incremental
Iterative Methods,” AIAA 94-4262, 1994.
10 Young, D.P., Huffman, W.P., Melvin, R.G., Bieterman, M.B., Hilmes, C.L., and Johnson, F.T.,
“Inexactness and Global Convergence in Design Optimization,” AIAA 94-4386, 1994.
11 Anderson, W.K., Newman, J.C., Whitfield, D.L., and Nielsen, E.J., "Sensitivity Analysis for the Navier-
Stokes Equations on Unstructured Meshes Using Complex Variables," AIAA Journal, Vol. 39, No. 1, 2001.
12 Lyness, J.N., “Numerical Algorithms Based on the Theory of Complex Variables,” Proc. ACM 22nd
Nat. Conf., Thomas Book Co., Washington, D.C., 1967, pp. 124-134.
13 Lyness, J.N. and Moler, C.B., “Numerical Differentiation of Analytic Functions,” SIAM Journal of
Numerical Analysis, Vol. 4, 1967, pp. 202-210.
14 Martins, J.R.R.A., Kroo, I.M., and Alonso, J.J., "An Automated Method for Sensitivity Analysis using
Complex Variables," AIAA 2000-0689, 2000.
15 Newman, J.C., Anderson, W.K., and Whitfield, D.L., “Multidisciplinary Sensitivity Derivatives Using
Complex Variables,” Mississippi State University Report No. MSSU-COE-ERC-98-08, 1998.
16 Squire, W. and Trapp, G., “Using Complex Variables to Estimate Derivatives of Real Functions,” SIAM
Review, Vol. 10, No. 1, 1998, pp. 110-112.
17 Anderson, W.K. and Bonhaus, D.L., "Airfoil Design on Unstructured Grids for Turbulent Flows,"
AIAA Journal, Vol. 37, No. 2, 1999, pp. 185–191.
18 Anderson, W.K. and Venkatakrishnan, V., "Aerodynamic Design Optimization on Unstructured
Grids with a Continuous Adjoint Formulation," Computers and Fluids, Vol. 28, No. 4, 1999.
19 Elliott, J., "Discrete Adjoint Analysis and Optimization with Overset Grid Modelling of the
Compressible High-Re Navier-Stokes Equations," 6th Overset Grid and Solution Technology
Symposium, Fort Walton Beach, FL, Oct. 2002.
20 Giles, M.B., Duta, M.C., Muller, J.-D., and Pierce, N.A., "Algorithm Developments for Discrete Adjoint
Methods," AIAA Journal, Vol. 41, No. 2, February 2003, pp. 198-205.
21 Iollo, A., Salas, M.D., and Ta’asan, S., "Shape Optimization Governed by the Euler Equations Using
an Adjoint Method," ICASE Report No. 93-78, November 1993.
22 Jameson, A., Pierce, N.A., and Martinelli, L., "Optimum Aerodynamic Design Using the Navier-Stokes
Equations," AIAA 97-0101, January 1997.
23 Kim, C.S., Kim, C., and Rho, O.H., "Sensitivity Analysis for the Navier-Stokes Equations with Two-
Equation Turbulence Models," AIAA Journal, Vol. 39, No. 5, May 2001, pp. 838–845.
24 Kim, H.-J., Sasaki, D., Obayashi, S., and Nakahashi, K., "Aerodynamic Optimization of Supersonic
Transport Wing Using Unstructured Adjoint Method," AIAA Journal, Vol. 39, No. 6, June 2001, pp. 1011–
1020.
25 Mohammadi, B., "Optimal Shape Design, Reverse Mode of Automatic Differentiation and
Turbulence," AIAA 97-0099, January 1997.
26 Nemec, M. and Zingg, D.W., "Towards Efficient Aerodynamic Shape Optimization Based on the
Navier-Stokes Equations," AIAA Paper 2001-2532, 2001.
27 Newman III, J.C., Taylor III, A.C., and Burgreen, G.W., "An Unstructured Grid Approach to Sensitivity
Analysis and Shape Optimization Using the Euler Equations," AIAA 95-1646, 1995.
28 Nielsen, E.J., "Aerodynamic Design Sensitivities on an Unstructured Mesh Using the Navier-Stokes
Equations and a Discrete Adjoint Formulation," Ph.D. Dissertation, Dept. of Aerospace and Ocean
Engineering, Virginia Polytechnic Inst. and State Univ., December 1998.
29 Nielsen, E.J. and Anderson, W.K., "Aerodynamic Design Optimization on Unstructured Meshes Using
the Navier-Stokes Equations," AIAA Journal, Vol. 37, No. 11, 1999, pp. 1411–1419.
30 Nielsen, E.J. and Anderson, W.K., "Recent Improvements in Aerodynamic Design Optimization on
Unstructured Meshes," AIAA Journal, Vol. 40, No. 6, 2002, pp. 1155–1163.
31 Nielsen, E.J., Lu, J., Park, M.A., and Darmofal, D.L., "An Implicit, Exact Dual Adjoint Solution Method
for Turbulent Flows on Unstructured Grids," Computers and Fluids, Vol. 33, No. 9, 2004.
32 Reuther, J.J., Jameson, A., Alonso, J.J., Rimlinger, M.J., and Saunders, D., "Constrained Multipoint
Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers," Journal of
Aircraft, Vol. 36, No. 1, 1999, pp. 51–60.
33 Soemarwoto, B., "Multipoint Aerodynamic Design by Optimization," Ph.D. Dissertation, Dept. of
Theoretical Aerodynamics, Delft University of Technology, December 1996.
34 Soto, O. and Lohner, R., "A Mixed Adjoint Formulation for Incompressible Turbulent Problems,"
AIAA 2002-0451, 2002.
35 Sung, C. and Kwon, J.H., "Aerodynamic Design Optimization Using the Navier-Stokes and Adjoint
Equations," AIAA 2001-0266, 2001.
36 Sundaram, P., Agrawal, S., and Hager, J.O., "Aerospace Vehicle MDO Shape Optimization Using
ADIFOR 3.0 Gradients," AIAA 2000-4733, 2000.
37 Samareh, J.A., "A Novel Shape Parameterization Approach," NASA TM-1999-209116, May 1999.
38 Samareh, J.A., "Aerodynamic Shape Optimization Based on Free-Form Deformation," AIAA 2004-
4630, 2004.
39 Saad, Y. and Schultz, M.H., "GMRES: A Generalized Minimal Residual Algorithm for Solving
Nonsymmetric Linear Systems," SIAM Journal of Scientific and Statistical Computing, Vol. 7, No. 3, 1986,
pp. 856–869.
40 Schmitt, V. and Charpin, F., "Pressure Distributions on the ONERA M6 Wing at Transonic Mach
Numbers," Experimental Database for Computer Program Assessment, AGARD-AR-138, May 1979,
pp. B1-1-B1-44.
41 Kleb, W.L., Nielsen, E.J., Gnoffo, P.A., Park, M.A., and Wood, W.A., "Collaborative Software
Development in Support of Fast Adaptive AeroSpace Tools (FAAST)," AIAA 2003-3978, 2003.
42 Jones, W.T., "GridEx - An Integrated Grid Generation Package for CFD," AIAA 2003-4129, 2003.
43 Marcum, D.L., and Weatherill, N.P., "Unstructured Grid Generation Using Iterative Point Insertion
and Local Reconnection," AIAA Journal, Vol. 33, No. 9, 1995, pp. 1619-1625.
44 Anderson, W.K., Rausch, R.D., and Bonhaus, D.L., "Implicit/Multigrid Algorithms for Incompressible
Turbulent Flows on Unstructured Grids," Journal of Computational Physics, Vol. 128, 1996.
45 Chorin, A.J., "A Numerical Method for Solving Incompressible Viscous Flow Problems," Journal of
Computational Physics, Vol. 2, 1967, pp. 12-26.
46 Love, M.H., Zink, P.S., Stroud, R.L., Bye, D.R., and Chase, C., "Impact of Actuation Concepts on
Morphing Aircraft Structures," AIAA 2004-1724, 2004.
47 Gill, P.E., Murray, W., Saunders, M.A., and Wright, M.H., "User’s Guide for NPSOL 5.0: A FORTRAN
Package for Nonlinear Programming," Technical Report SOL 94, 1995.
48 Kaufman, L. and Gay, D., "PORT Library: Optimization and Mathematical Programming - User’s
Manual," Bell Laboratories, 1997.
14.4 Case Study - Mesh Sensitivity Through An Algorithm to Detect High-Γ Regions
for Unstructured Mesh in Two Dimensions
Author : Hiroaki Nishikawa
Title : An Algorithm to Detect High-Γ Regions for Unstructured Grids in Two Dimensions
Source : https://doi.org/10.13140/RG.2.2.20233.80486/2
We will describe an algorithm to estimate the parameter Γ, which is a measure of how strongly a
mesh is thin and curved, and thereby detect high-Γ (Γ≥ 1) regions in a two-dimensional unstructured
mesh. It is well known that high-Γ meshes present challenges for accurate gradient computations and
stable finite-volume solver convergence as discussed in Refs.[1, 2, 3]. Accurate detection of high-Γ
regions will allow us to locally modify discretization algorithms for accurate and stable simulations.
However, reliable algorithms are not currently available, to the best of the author's knowledge, for
estimating Γ in unstructured grids. Therefore, discretization methods cannot be adaptively modified
and are often equipped with some other techniques applied to a whole grid, such as a limiter function
or an inaccurate gradient method, to avoid problems with high-Γ meshes. In order to maintain an
optimal nature of a target discretization with modifications applied only in high-Γ regions, it is
necessary to develop a reliable general algorithm to estimate Γ for arbitrary grids, including
especially fully irregular grids towards adaptive-mesh simulations. This note describes one such
algorithm for unstructured meshes in two dimensions. It also generates a pair of local-coordinate
basis vectors at each node, which can be used to perform a local approximate mapping for accurate
gradient computations and also stable solution reconstructions [1, 2, 3].
14.4.1.1 Algorithm to Compute Γ
Full developments concerning Γ can be obtained from [Nishikawa]555.
14.4.2 Results
The proposed algorithm was applied to different types of irregular triangular grids.
➢ Isotropic grid in a square domain: The result is shown in Figure 14.9 (a). As desired, the
algorithm gives Γ = 0 for this non-curved grid.
➢ Anisotropic grid in an anisotropic rectangular domain: The result is shown in Figure 14.9
(b). As desired, again, the algorithm gives Γ = 0 for this non-curved grid.
➢ Irregular triangular grid over a half-circular cylinder: This test case was used to show the
effect of boundaries. Again, the nodal coordinates are perturbed randomly in the
circumferential direction (see Figure 14.11 (b)). Despite the irregularity, the proposed
algorithm again estimated Γ as desired. See Figure 14.11 (c). For this case, we plotted the
local coordinate basis vectors. See Figure 14.11 (d) and Figure 14.11 (e).
➢ Irregular grid around a Joukowsky airfoil: The result is shown in Figure 14.10. As
expected, Γ is very large around the airfoil, especially at the trailing edge and then at the
555 Hiroaki Nishikawa, “ An Algorithm to Detect High-Γ Regions for Unstructured Grids in Two Dimensions”, 2019.
leading edge. The local coordinate basis vectors are shown in Figure 14.10 (f) and Figure
14.10 (g).
Figure 14.9 Results for irregular but non-curved triangular grids ( Γ = 0 everywhere ).
14.4.4 References
[1] B. Diskin and J. L. Thomas. Accuracy analysis for mixed-element finite-volume discretization
schemes. NIA Report No. 2007-08, 2007.
[2] B. Diskin and J. L. Thomas. Accuracy of gradient reconstruction on grids with high aspect ratio. NIA
Report No. 2008-12, 2008.
[3] B. Diskin, J. L. Thomas, E. J. Nielsen, H. Nishikawa, and J. A. White. Comparison of node-centered
and cell-centered unstructured finite-volume discretization: Viscous fluxes. AIAA J., 48(7):1326–1338,
July 2010.
Figure 14.10 Results for an irregular triangular grid over a Joukowsky airfoil. Note that the contours are
plotted in (c) and (d) with a restricted range, [0; 25], to visualize the variation near the airfoil. The
maximum value is 902,788 and the average is 10,505.
Figure 14.11 Results for an irregular triangular grid over a half-cylinder domain with straight
boundaries.
1. Run the initial simulation on your initial mesh and ensure convergence of the residual error to
10⁻⁴, that the monitor points are steady, and that the imbalances are below 1%. If not, refine the
mesh and repeat.
2. Once you have met the convergence criteria above for your first simulation, refine the mesh
globally so that you have finer cells throughout the domain. Generally we would aim for
around 1.5 times the initial mesh size. Run the simulation and ensure that the residual error
drops below 10⁻⁴, that the monitor points are steady, and that the imbalances are below 1%.
At this point you need to compare the monitor point values from Step 2 against the values
from Step 1. If they are the same (within your own allowable tolerance), then the mesh at
Step 1 was accurate enough to capture the result. If the value at Step 2 is not within acceptable
values of the Step 1 result, then this means that your solution is changing because of your
mesh resolution, and hence the solution is not yet independent of the mesh. In this case you
will need to move to Step 3.
you refine your mesh further. The answer comes from the requirement that the numerical solution
be independent of the grid structure, also called the mesh. In every computational analysis, a mesh
independence study, also referred to as a mesh convergence study, ought to be conducted to obtain
credible results; otherwise, the results obtained would be regarded with skepticism (see
Figure 14.12).
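A small helper along these lines can automate the comparison of monitor-point values between successive global refinements; the tolerance and the sample values below are hypothetical and only illustrate the bookkeeping of such a study.

```python
def mesh_independent(monitor_values, tol=0.01):
    """Compare a monitor-point quantity across successively refined meshes.
    Returns True once the relative change between two consecutive meshes
    falls below the chosen tolerance (e.g. 1%)."""
    for coarse, fine in zip(monitor_values, monitor_values[1:]):
        change = abs(fine - coarse) / abs(coarse)
        print(f"relative change: {change:.3%}")
        if change < tol:
            return True
    return False

# Hypothetical pressure-drop monitor from three successively refined meshes
print(mesh_independent([105.2, 101.8, 101.4]))
```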
15.1.1 Centaur®
➢ Singular points
➢ Small scales
➢ Small angles
➢ Disparate length scales
➢ Different scales orientation / directionality
➢ Mega Geometries - Mega geometries pose additional requirements to the grid generator.
Some common issues are the existence of thousands of panels and "dirty" CAD. Solutions to
these issues include:
• Automatic Setup
• Automatic identification of similar parts (e.g. pipes)
• Auto CAD Cleaning
➢ Mega Meshes - Mega meshes are required in applications such as LES, multi-stage
turbomachinery, aircraft take-off and landing, etc. (see Figure 15.1). Requirements that
have to be met for the generation of "mega" meshes include:
Figure 15.1 Mega Meshing for Aircraft Landing & Takeoff – Courtesy of Centaur©
• Optimum meshes (minimum number of elements for given accuracy - hybrid mesh
approach)
• Parallel / Multi-Core mesh generation
• Robustness of mesh generation
15.1.2 Pointwise®
General Guidelines include:
➢ Recommended grids have at least 2 layers of constant cell spacing normal to viscous walls
(Extrusion Layer, will be discussed next).
➢ Smoothing Extrusion Layers (see below).
➢ Achieving consistent cell sizes and spacing across gaps.
➢ Mesh quality/characteristics.
➢ Medium grid level ~ 3-4 hours per iteration.
Both techniques start from a tri or quad mesh and march outward, creating layers of cells (prisms
and hexahedra, respectively). T-Rex is an advancing layer technique that marches each grid point on
the extrusion front outward in a direction that is nominally orthogonal to the wall and with step sizes
prescribed to achieve the proper boundary layer resolution. The anisotropic tetrahedra produced by
joining each extruded point back to the extrusion front are combined to form stacks of prisms or
hexahedra. T-Rex includes extensive smoothing methods to control the extrusion trajectory, adjust
cell shapes, and avoid collisions with other extrusion fronts.
Algebraic extrusion in Pointwise consists of defined trajectories for the mesh to follow including
extrusion along a line, rotation around an axis, along a user-prescribed path, and normal to the initial
mesh, the latter useful for extruding layers of prisms from a triangular mesh in a manner that mimics
the behavior of the hyperbolic PDE method for extruding hexahedra from quads. A variety of
smoothing options is necessary to ensure that the algebraic techniques generate a non-folded mesh
simply due to the fact that they lack an elegant mathematical basis like the PDE methods.
The introduction of mixed-cell grids (i.e. surface grids that contain both triangles and quads, volume
grids that contain tetrahedra, hexahedra, pyramids, and prisms) required a new implementation of
smoothing in the extrusion methods to account for cell-to-cell variation in type. In addition to
supporting mixed cell types in the same grid, the goal of the new smoothing was to optimize element
shape and size to ensure good boundary layer resolution.
15.1.2.3 Optimization-Based Smoothing
The new smoothing method is based on optimization of a cost function related to element quality. In
particular, a cost function is defined at each mesh cell's corners as follows, where WCN is the
Weighted Condition Number and J is the Jacobian.
\[ C = J \;\;\text{if } J \le 0, \qquad C = \frac{1}{WCN} \;\;\text{if } J > 0, \qquad \text{where} \quad J = \begin{vmatrix} E_x & F_x & G_x \\ E_y & F_y & G_y \\ E_z & F_z & G_z \end{vmatrix} \]
Eq. 15.1
Note that the Jacobian only comes into play for inverted corners and is computed as the determinant
of a 3 x 3 matrix created by columns of the edge vectors E, F, and G shown in Eq. 15.1.
15.1.2.4 Computing Weighted Condition Number (WCN) for Prisms
Consider the prism shown in Figure 15.2 for which computations
are being made at the lower left corner. WCN is computed from
two components: a matrix A created from the cell's actual edges
(Figure 15.2) and matrix W that transforms a right-angled corner
with unit edge lengths into the desired corner shape with the
desired physical edge lengths. The two quantities in the numerator
of the WCN equation below are the Fresenius norms of the matrix
products. Figure 15.2 Vectors used for
computing the weighted
Ex Fx Gx condition number of a prism at
‖AW −1 ‖‖wA−1 ‖ the corner shared by edges E, F,
A = [Ey Fy Gy ] , WCN = and G
3
Ez Fz Gz
Eq. 15.2
The formulation of matrix W is illustrated in Figure 15.3. The parameters U, V, W, and θ are derived
from the desired shape of the extruded cell. The height W is the marching step size and is usually
much smaller than U and V. Therefore, the weight matrix W is computed as
\[ \mathbf{W} = \begin{bmatrix} |\vec{U}| & |\vec{V}|\cos\theta & 0 \\ 0 & |\vec{V}|\sin\theta & 0 \\ 0 & 0 & |\vec{W}| \end{bmatrix} \]
Eq. 15.3
Figure 15.3 Transforming an Ideal Corner (left) to the Desired Shape (right)
For a prism, consider computation of the weight matrix W for its bottom and top faces as illustrated
in Figure 15.4. For the bottom face (left), U, V, and θ are obtained from the extrusion front (layer k).
W is perpendicular to the bottom face with a length equal to the extrusion step size at that location.
For the top face (right), U, V, and θ are computed in a manner that influences the mesh to be optimal.
In particular, U and V for layer k+1 are set to the average of their counterparts on the bottom face
(layer k) and θ is set to π/3.
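A compact sketch of the corner cost of Eq. 15.1 through Eq. 15.3 is given below; the edge vectors and target angle are hypothetical, and the check simply confirms that a corner matching its target shape returns a cost near one. It is an illustration under those assumptions, not the Pointwise implementation.

```python
import numpy as np

def weight_matrix(U, V, Wvec, theta):
    """Weight matrix W of Eq. 15.3 built from desired edge lengths and angle."""
    return np.array([[np.linalg.norm(U), np.linalg.norm(V) * np.cos(theta), 0.0],
                     [0.0,               np.linalg.norm(V) * np.sin(theta), 0.0],
                     [0.0,               0.0,               np.linalg.norm(Wvec)]])

def corner_cost(E, F, G, W):
    """Corner cost of Eq. 15.1: the Jacobian for inverted corners, 1/WCN otherwise."""
    A = np.column_stack((E, F, G))                 # actual edge vectors at the corner
    J = np.linalg.det(A)
    if J <= 0.0:
        return J
    WCN = (np.linalg.norm(A @ np.linalg.inv(W), 'fro') *
           np.linalg.norm(W @ np.linalg.inv(A), 'fro')) / 3.0
    return 1.0 / WCN

# Hypothetical corner: two in-plane edges at 60 degrees and a short extrusion step
E = np.array([1.0, 0.0, 0.0])
F = np.array([0.5, 0.866, 0.0])
G = np.array([0.0, 0.0, 0.05])
W = weight_matrix(E, F, G, theta=np.pi / 3)
print(corner_cost(E, F, G, W))     # ~1.0 when the corner matches its target shape
```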
Figure 15.4 Computing the Weight Vector for the Bottom (left) and top (right) Faces of a Prism
Figure 15.5 Computing the Weight Vector for the Bottom (left) and Top (right) Faces of a Hexahedron
\[ \vec{P}_{avg} = \frac{1}{n_j}\sum_{j=1}^{n_j} (\vec{s}_n)_j , \qquad MF = \left[\max\left(0,\, 1 - C_{min}\right)\right]^P , \qquad AF = 1 - MF \]
\[ C_{node} = MF \cdot C_{min} + AF \cdot C_{avg} , \qquad \vec{P}_{node} = MF \cdot \vec{P}_{min} + AF \cdot \vec{P}_{avg} \]
Eq. 15.4
Smoothing on the advanced layer is also limited by the extrusion step size h, which corresponds to
the vector W presented in Figure 15.4 and Figure 15.5. This is done because the step size is usually
significantly smaller than the edges U and V of the base cell and prevents possible inversions of the
cell. The factor ω is typically 0.5.
\[ \vec{X}_{node}^{\,i+1} = \vec{X}_{node}^{\,i} + \frac{h\,\omega}{\max\left(1.0,\, \left|\vec{P}_{node}\right|\right)}\,\vec{P}_{node} \]
Eq. 15.5
Smoothing is iterative, and typically 5-200 iterations are used. Enforcement of the normal spacing is
built into the cost function along with the desired shape or quality.
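A rough sketch of one smoothing pass in the spirit of Eq. 15.4 and Eq. 15.5 is shown below. The interpretation of the candidate vectors (s_n)_j and of P_min, as well as the neighbour data themselves, are assumptions made for illustration rather than the exact Pointwise algorithm.

```python
import numpy as np

def smooth_node(X_node, s_vectors, costs, h, P_exp=2.0, omega=0.5):
    """One smoothing update of a front node (Eq. 15.4 and Eq. 15.5, sketched).
    s_vectors: candidate displacement vectors from the surrounding cells,
    costs: corner costs of those cells (1 = ideal, smaller = worse)."""
    C_min = np.min(costs)
    C_avg = np.mean(costs)
    P_avg = np.mean(s_vectors, axis=0)
    P_min = s_vectors[np.argmin(costs)]

    MF = max(0.0, 1.0 - C_min) ** P_exp          # weight toward the worst cell
    AF = 1.0 - MF
    C_node = MF * C_min + AF * C_avg             # blended node quality (diagnostic)
    P_node = MF * P_min + AF * P_avg             # blended displacement direction

    # Limit the move by the extrusion step size h (Eq. 15.5)
    X_new = X_node + (h * omega / max(1.0, np.linalg.norm(P_node))) * P_node
    return X_new, C_node

# Hypothetical node surrounded by three cells
X = np.array([0.0, 0.0, 0.0])
s = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0], [-0.05, 0.05, 0.0]])
c = np.array([0.9, 0.4, 0.7])
print(smooth_node(X, s, c, h=0.01))
```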
Figure 15.7 40 Extrusion layers on the Symmetry Plane of the ONERA-M6 Wing at the Leading Edge
for Smoothing Exponent P = 0 (left) and P = 2 (right)
we're doing now. Furthermore, in my experience the implied caveat of automatic mesh generation is
that the desired mesh is generated without human intervention and also with the same result had
the user generated the mesh entirely by hand. We want to have our cake and eat it too. Paradoxically,
even a quantitative assessment such as “1-click” (i.e. automatic) meshing depends on when you start
and stop counting clicks.
Creating an automatic mesh generator; one that supports geometry of any format, type, or quality;
understands diverse configurations ranging from biological to automotive to chemical process to
aerospace; generates topology and cells supported by all available CFD solvers (especially yours);
and exports or shares data with your particular solver is not an easy job. Further, automatic methods
are plagued by the inevitable dead ends where 90 percent of the mesh is generated automatically but
the last 10 percent is either virtually impossible to complete or consumes days or even weeks of time.
But creating an automated mesh generator is a much more tractable task, especially when
automation is coupled with or built upon manual techniques that serve as backups when automation
goes astray.
15.1.3 Ansys®
ANSYS Meshing is a component of ANSYS Workbench which incorporates the meshing platform and
combines and builds on the strengths of the preprocessing offerings from ANSYS. It includes tools such
as ICEM CFD, TGRID, Fluent, CFX, and Gambit558. The workflow is displayed in Figure 15.8.
➢ Automatically calculates global element sizes based on the smallest geometric entity
➢ Smart defaults are chosen based on physics preference
➢ Makes global adjustments for required level of mesh refinement
➢ Advanced Size Functions for resolving regions with curvatures and proximity of surfaces
558 Metin Ozen, Ph.D., ASME Fellow, Meshing Workshop, Ozen Engineering, Inc., 2014.
➢ Geometry Modification
• 3D Operations: Booleans, Decompose, etc.
• Geometry Cleanup and Repair: Automatic Cleanup; Simplification, Mid-surface, Fluid Extraction
➢ Meshing
• Meshing Methods: Hybrid Mesh (Tet, Prisms, Pyramids), Hexa Dominant, Sweep meshing, Assembly Meshing
• Global Mesh Settings
• Local Mesh Settings: Sizing, Controls, etc.
➢ Solver
• Fluid Flow: Geometry, Mesh, Set Up, Solution, Post-Processing
operations behave when they are applied in a sequence 559. Every meshing operation in COMSOL
Multiphysics creates a mesh that conforms to the respective geometry. But the tetrahedral mesh
generator, which operates under the Free Tetrahedral node in the Model Builder, is the only mesh
generator in 3D that is fully automatic and can be applied to every geometry. And since it creates
an unstructured mesh, that is, a mesh with irregular connectivity, it is well suited for complex-shaped
geometries requiring a varying element size. Since tetrahedral meshes in COMSOL Multiphysics are
used for a variety of physics, including multi-physics, the mesh generator needs to be very flexible. It
should be possible to generate very fine meshes, very coarse meshes, meshes with fine resolution on
curved boundaries, meshes with anisotropic elements in narrow regions, etc.
• Advancing front-based generators, which pave the domain with tetrahedra, adding them one
by one beginning from a boundary
• Octree-based generators, which first decompose the domain with an octree and then
partition each octree cell into tetrahedra
• Delaunay-based generators, which maintain a geometric structure called the Delaunay
tetrahedralization of the domain and have remarkable mathematical properties
Figure 15.10 Skewness of Element Quality (Before Changing the Order of Meshing)
If we swap the order of the two Free Triangular nodes so that the operations are performed in
reverse order (with the right domain meshed first), we get different results. In the resulting plot,
Figure 15.11, we can see that the shared boundary now consists of finer mesh than before. As a
result, the right domain now consists entirely of fine elements, while the left domain has some fine
elements near the shared boundary. Consequently, the number of elements in the mesh has increased
and the Minimum element quality has almost doubled, which means that the overall quality of the
mesh has improved.
Figure 15.11 Skewness of Element Quality (After Changing the Order of Meshing)
15.1.4.4 3D Example
We will study a coil inside a box in 3D in order to see how these effects can appear in more advanced
geometries. The coil we are using is the adaptable coil Single Conductor Coil–Rectangular Wire,
Racetrack, Closed Side, available in the AC/DC Module Part Library (see Figure 15.12). In our model,
we add a box around the coil and adapt the coil so that the region between a pair of turns becomes
very narrow, meaning a very fine mesh is required between the turns to avoid low quality elements.
In this example, we would like to generate a mesh that is coarse in the surrounding box, a bit finer in
the coil, and sufficiently fine in the narrow regions between the turns.
Figure 15.12 The Coil Geometry - The Zoomed in View shows the Narrow Region Between the Coil
Turns
We start constructing our meshing sequence by setting the global Size node to the predefined
value Coarse. To obtain a mesh with sufficiently small elements in the narrow region, we have to
adjust the parameter Minimum element size so that we can resolve the narrow region, which has a
height of about 1.7e-4 m. This is done by selecting Custom in the global Size node and editing
the Minimum element size to be 2e-4 m. Next, we add two Free
Tetrahedral operations and select the coil in the first one and the surrounding box in the second.
To the first Free Tetrahedral node, the one acting on the coil, we add a local Size node set to the
predefined value Normal. In the plot below, Figure 15.13-A, we can see results similar to those from
the 2D examples: The narrow regions have elements of very poor quality, even though we specified
a small Minimum element size. Again, this is a result of the ordering of the meshing operations. When
the coil is meshed by the first operation, the narrow regions in the surrounding box do not act as a
constraint on the element size on the boundary. Therefore, the boundary mesh is generated
according to the specified mesh size on the coil, namely Normal. When the surrounding box is
meshed, the mesh of the shared boundary is fixed, hence the mesh elements in the narrow regions
are forced to have a skewed shape.
Next, we construct a new meshing sequence following our best practices. A single Free
Tetrahedral operation is added and applied to the entire geometry. We add a local Size node to the
operation with the default value Normal and the coil as the domain selection (as seen in Figure
15.13-B). The global Size node is set as in the previous sequence.
In conclusion, we have seen that the order of operations in a meshing sequence has an effect on the
resulting mesh. This is because the generated mesh is fixed, hence any mesh from a preceding
operation node is a starting point for following operations. For this reason, it’s best practice to use as
few operations as possible and to add Size nodes either globally or locally. Additionally, if you need
multiple operations in your sequence (for instance, if you want different element types),
it is very important to consider their order.
➢ Surface wrapper: wraps the initial surface to provide a closed and manifold surface mesh
from a complex geometry. For poor quality or complex CAD, this procedure ensures that the
geometry is closed and of sufficient quality for generating surface and volume meshes. The
surface wrapper comes with a tool for leak detection and is usually used in conjunction with
the surface remesher.
➢ Surface remesher: re-meshes the initial surface to provide a quality discretized mesh that
is suitable for CFD. It is used to retriangulate the surface based on a supplied target edge
length, and it can also omit specific surfaces or boundaries, preserving the original
triangulation from the imported mesh.
➢ Trimmer: generates a volume mesh by cutting a hexahedral template mesh with the
geometry surface. It is recommended when an underlying custom mesh needs to be used or
if the surface quality is not good enough for a polyhedral mesh. It is also useful for
modeling external aerodynamic flows because of its ability to refine cells in the wake region,
the unsteady and turbulent flow region caused by boundary layer separation.
➢ Polyhedral Meshes: generates a volume mesh that is composed of polyhedral-shaped cells.
It is numerically more stable, less diffusive, and more accurate than an equivalent tetrahedral
mesh. Moreover, it contains approximately five times fewer cells than a tetrahedral mesh
for a given starting surface.
➢ Tetrahedral Meshes: generates a volume mesh that is composed of tetrahedral-shaped cells.
According to CD-adapco563, tetrahedral meshes are only recommended when comparisons
have to be made with legacy tetrahedral models.
➢ Advancing Layer Meshes: creates a volume mesh composed of prismatic cell layers next to
wall boundaries and a polyhedral mesh elsewhere. The mesher creates a surface mesh on the
wall and projects it to create the prismatic cell layers. The prismatic cell layers help capture
the boundary layer, turbulence effects, and heat transfer near wall boundaries.
➢ Thin Meshes: generates a prismatic layered volume mesh for thin geometries, where good
quality cells are required to capture the solid material thickness adequately.
Sub-Models
much steeper in the viscous sublayer of a turbulent boundary layer than would be implied by
taking gradients from a coarse mesh.
• Extruder: generates an extruded mesh region from a boundary that one of the core volume
meshers has meshed. It is typically used for inlet and outlet boundaries to extend the volume
mesh beyond the original dimensions of the starting surface, so that a more representative
computational domain is obtained.
• Generalized cylinder meshes: generates a volume mesh appropriate for elongated cylindrical
regions. It uses extruded prismatic cells to reduce the overall cell count and improve the rate
of convergence in some cases.
• Shelling meshes: generates a shell mesh region from a boundary that one of the core volume
meshers has meshed. This model is specifically for modelling casting methods.
• Embedded thin meshes: similar to the default thin mesher, it is also used to generate a
prismatic-type mesh in predominantly thin geometries; it assumes that the thin geometries
are entirely contained within another region.
Figure 15.16 Predominantly Polyhedral Meshing with Advanced (Extrusion) Layer in Boundaries
15.2.1 Case Study 1 - Mesh Size Functions for Implicit Geometries and PDE-Based Gradient
Limiting565
Mesh generation and mesh enhancement algorithms often require a mesh size function to specify the
desired size of the elements. We present algorithms for automatic generation of a size function,
discretized on a background grid, by using distance functions and numerical PDE solvers. The size
function is adapted to the geometry, taking into account the local feature size and the boundary
curvature. It also obeys a grading constraint that limits the size ratio of neighboring elements. We
formulate the feature size in terms of the medial axis transform, and show how to compute it
accurately from a distance function. We propose a new Gradient Limiting Equation for the mesh
grading requirement, and we show how to solve it numerically with Hamilton-Jacobi solvers. We
show examples of the techniques using Cartesian and unstructured background grids in 2D and 3D,
and applications with numerical adaptation and mesh generation for images.
15.2.1.1 Introduction
Unstructured mesh generators use varying element sizes to resolve fine features of the geometry but
have a coarse grid where possible to reduce total mesh size. The element sizes can be described by a
mesh size function h(x) which is determined by many factors. At curved boundaries, h(x) should be
small to resolve the curvature. In regions with small local feature size (“narrow regions”), small
elements have to be used to get well-shaped elements. In an adaptive solver, constraints on the mesh
size are derived from an error estimator based on a numerical solution. In addition, h(x) must satisfy
any restrictions given by the user, such as specified sizes close to a point, a boundary, or a subdomain
of the geometry. Finally, the ratio between the sizes of neighboring elements has to be limited, which
corresponds to a constraint on the magnitude of ∇h(x).
In many mesh generation algorithms it is advantageous if an appropriate mesh size function h(x) is
known prior to computing the mesh. This includes the advancing front method566, the paving method
for quadrilateral meshes567, and smoothing-based mesh generators such as the one we proposed
in568-569. The popular Delaunay refinement algorithm typically does not need an explicit size function
since good element sizing is implied by the quality bound, but higher-quality meshes can be
obtained with good a priori size functions. Many techniques have been proposed for automatic
565 Per-Olof Persson, “Mesh Size Functions For Implicit Geometries and PDE-Based Gradient Limiting”, Dept. of
Mathematics, Massachusetts Institute of Technology.
566 Peraire J., Vahdati M., Morgan K., Zienkiewicz O.C. “Adaptive Remeshing for Compressible Flow Computations.”
generation of mesh size functions, see 570-571-572. A common solution is to represent the size function
in a discretized form on a background grid and obtain the actual values of h(x) by interpolation, as
described in 15.2.1.2.1.
We present several new approaches for automatic generation of mesh size functions. We represent
the geometry by its signed distance function (distance to the boundary). We compute the curvature
and the medial axis directly from the distance function, and we propose a new skeletonization
algorithm with sub-grid accuracy. The gradient limiting constraint is expressed as the solution of our
gradient limiting equation, a hyperbolic PDE which can be solved efficiently using fast solvers.
570 Owen S.J., Saigal S. “Surface mesh sizing control.” Int. J. Num. Methods Eng., vol. 47, no. 1-3, 497–511, 2000.
571 Zhu J., Blacker T., Smith R. “Background Over-lay Grid Size Functions.” Proceedings of the 11th International
Meshing Roundtable, pp. 65–74. Sandia Nat. Lab., September 2002.
572 Zhu J. “A New Type of Size Function Respecting Pre-meshed Entities.” Proceedings of the 11th International
triangle (or tetrahedron) enclosing an arbitrary point x can still be done in logarithmic time, but the
algorithm is slower and more complicated.
Figure 15.18 Background Grids for Discretization of the Distance Function and the Mesh Size Function ((c) Unstructured)
573 Osher S., Sethian J.A. “Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-
Jacobi formulations.” J. Computational Phys., vol. 79, no. 1, 12–49, 1988.
574 Sethian J.A. “Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi
assuming the geometry is well resolved. The remaining nodes are again obtained with the fast
marching method. We also mention the closest point transform by [Mauch]576, which gives exact
distance functions in the entire domain in linear time. A general implicit function φ can be
reinitialized to a distance function in several ways. [Sussman et al]577 proposed integrating the
reinitialization equation
φt + sign(φ) (|∇φ| − 1) = 0
Eq. 15.6
for a short period of time. Another option is to explicitly compute the distances to the zero level set
for nodes close to the boundary (e.g. using the approximate projections), and use the fast marching
method for the rest of the domain.
1. Curvature Adaptation - On the boundaries, we require h(x) ≤ 1/(K |κ(x)|), where κ is the
boundary curvature. The resolution is controlled by the parameter K, which is the number of
elements per radian in 2D (it is related to the maximum spanning angle θ by 1/K = 2 sin(θ/2)).
2. Local Feature Size Adaptation - Everywhere in the domain, h(x) ≤ lfs(x)/R. The local feature
size lfs(x) is, loosely speaking, half the width of the geometry at x. The parameter R gives half
the number of elements across narrow regions of the geometry.
3. Non-Geometric Adaptation - An additional external spacing function hext(x) might be given
by an adaptive numerical solver or as a user-specified function (often at isolated points or
boundaries). We then require that h(x) ≤ hext(x).
4. Gradient Limiting - The grading requirement means that the sizes of two neighboring
elements in a mesh should not differ by more than a factor G, or hi ≤ G hj for all neighboring
elements i, j. The continuous analogue of this is that the magnitude of the gradient of the size
function is limited by |∇h(x)| ≤ g, where g depends on the interpretation of the element sizes
but is approximately G − 1.
5. Optimality - In addition to the above requirements (which are all upper bounds), we require
that h(x) is as large as possible at all points.
We now show how to create a size function h(x) according to these requirements, starting from an
implicit boundary definition by its signed distance function φ(x), with a negative sign inside the
geometry.
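For concreteness, the following minimal sketch builds such a signed distance function on a Cartesian background grid for an assumed example geometry (a disk of radius 3 centered in a 10 × 10 box). The use of SciPy's Euclidean distance transform, the grid resolution, and the function name are illustrative choices rather than part of the method itself.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(inside, dx):
    """Approximate signed distance on a Cartesian grid from a boolean
    'inside' mask (negative inside the geometry, positive outside)."""
    d_out = distance_transform_edt(~inside) * dx   # distance for exterior nodes
    d_in  = distance_transform_edt(inside) * dx    # distance for interior nodes
    return np.where(inside, -d_in, d_out)

# Assumed example geometry: a disk of radius 3 centered in a 10 x 10 box
n  = 201
dx = 10.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 10, n), np.linspace(0, 10, n), indexing="ij")
inside = (x - 5.0)**2 + (y - 5.0)**2 <= 3.0**2
phi = signed_distance(inside, dx)                  # phi < 0 inside the disk
```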
576Mauch S.P. Efficient Algorithms for Solving Static Hamilton-Jacobi Equations. Ph.D. thesis, Cal-tech, 2003.
577Sussman M., Smereka P., Osher S. “A level set approach for computing solutions to incompressible two-phase
flow.” J. Com. Phys., vol. 114, no. 1, 146–159, 1994.
where κ(x) is the curvature at x. In three dimensions we use the maximum principal curvature in
order to resolve the smallest radius of curvature. For an unstructured background grid, where the
elements are aligned with the boundaries, we simply assign values for h(x) on the boundary nodes
and set the remaining nodal values to infinity. Later on, the gradient limiting will propagate these
values into the rest of the region. The boundary curvature might be available as a closed form
expression (e.g. by a CAD representation), or it can be approximated from the surface triangulation.
For an implicit boundary discretization on a Cartesian background grid we can compute the
curvature from the distance function, for example in 2D:
\kappa = \nabla \cdot \frac{\nabla\varphi}{|\nabla\varphi|} = \frac{\varphi_{xx}\varphi_y^2 - 2\varphi_x\varphi_y\varphi_{xy} + \varphi_{yy}\varphi_x^2}{(\varphi_x^2 + \varphi_y^2)^{3/2}}
Eq. 15.8
In 3D similar expressions give the mean curvature κH and the Gaussian curvature κK, from which the
principal curvatures are obtained as
κ1,2 = κH ± √(κH² − κK)
Eq. 15.9
On a Cartesian grid, we use standard 2nd order difference approximations for the derivatives. These
difference approximations give us accurate curvatures at the node points, and we could compute
mesh sizes directly according to Eq. 15.7 on the nodes close to the boundary, and set the remaining
interior and exterior nodes to infinity. However, since in general the nodes are not located on the
boundary, we get a poor approximation of the true, continuous, curvature requirement Eq. 15.7.
Below we show how to modify the calculations to include a correction for node points not aligned
with the boundaries. In 2D, suppose we calculate a curvature κij at the grid point xij . This point is
generally not located on the boundary, but a distance |φij | away. If we set hcurv(xij) = 1/(K |κij| ) we
introduce two sources of errors:
1. We use the curvature at xij instead of at the boundary. We can compensate for this by adding
φij to the radius of curvature:
\kappa_{\text{bound}} = \frac{1}{\tfrac{1}{\kappa_{ij}} + \varphi_{ij}} = \frac{\kappa_{ij}}{1 + \kappa_{ij}\varphi_{ij}}
Eq. 15.10
Note that we keep the signs on κ and φ. If, for example, φ > 0 and κ > 0, we should increase
the radius of curvature. This expression is exact for circles, including the limiting case of zero
curvature (a straight line).
2. Even if we use the corrected curvature κbound, we impose our hcurv at the grid point xij instead
of at the boundary. However, the grid point will be affected indirectly by the gradient limiting,
and we can get a better estimate of the correct h by adding g|φij|. Interpolation of the absolute
value function is inaccurate, though, and again we keep the sign of φ and subtract gφij (that is,
we add the distance inside the region and subtract it outside).
Putting this together, we get the following definition of hcurv in terms of the grid spacing Δx:
h_{\mathrm{curv}}(\mathbf{x}_{ij}) = \begin{cases} \left| \dfrac{1 + \kappa_{ij}\varphi_{ij}}{K\,\kappa_{ij}} \right| - g\,\varphi_{ij}, & |\varphi_{ij}| \le 2\Delta x \\ \infty, & |\varphi_{ij}| > 2\Delta x \end{cases}
Eq. 15.11
This will limit the edge sizes in a narrow band around the boundaries, but it will not have any effect
in the interior of the region. A similar expression can be used in three dimensions, where the
curvature is replaced by maximum principal curvature as before, and the correction makes the
expression exact for spheres and planes.
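A minimal 2D sketch of this calculation is given below, assuming the signed distance φ is available as a NumPy array on a uniform grid; the curvature is evaluated with second-order differences (Eq. 15.8) and the correction of Eq. 15.11 is applied in a narrow band. The function name and the use of np.gradient are illustrative assumptions.

```python
import numpy as np

def h_curv(phi, dx, K, g):
    """Curvature-based size function on a 2D Cartesian grid, a sketch of
    Eq. 15.8 and Eq. 15.11 (phi is the signed distance function)."""
    px,  py  = np.gradient(phi, dx)                # first derivatives
    pxx, pxy = np.gradient(px, dx)                 # second derivatives
    _,   pyy = np.gradient(py, dx)
    kappa = (pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2) / \
            (px**2 + py**2) ** 1.5                 # curvature, Eq. 15.8
    h = np.full_like(phi, np.inf)                  # no constraint away from the boundary
    band = np.abs(phi) <= 2.0 * dx                 # narrow band around the boundary
    with np.errstate(divide="ignore"):             # kappa ~ 0 (flat boundary) gives h -> infinity
        h[band] = np.abs((1.0 + kappa[band] * phi[band]) /
                         (K * kappa[band])) - g * phi[band]   # Eq. 15.11
    return h
```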
578Ruppert J. “A Delaunay refinement algorithm for quality 2-dimensional mesh generation.” J. Algorithms, vol.
18, no. 3, 548–585, 1995.
large number of algorithms have been proposed. Many of them, including the original Grassfire
algorithm by [Blum]579, are based on explicit representations of the geometry. [Kimmel et al]580
described an algorithm for finding the medial axis from a distance function in two dimensions, by
segmenting the boundary curve with respect to curvature extrema. [Siddiqi et al]581 used a
divergence based formulation combined with a thinning process to guarantee a correct topology.
[Telea and Wijk]582 showed how to use the fast marching method for skeletonization and centerline
extraction.
Although in principle we could use any existing algorithm for skeletonization using distance
functions, we have developed a new method mainly because our requirements are different than
those in other applications. Maintaining the correct topology is not a high priority for us, since we do
not use the skeleton topology (and if we did, we could combine our algorithm with thinning). This
means that small “holes” in the skeleton will only cause a minor perturbation of the local feature size.
However, an incorrect detection of the skeleton close to the boundary is worse, since our Eq. 14.8
would set the feature size to a very small value close to that point.
We also need a higher accuracy of the computed medial axis location. Applications in image
processing and computer graphics often work on a pixel level, and having a higher level of detail is
referred to as sub-grid accuracy. A final desired requirement is to have a minimum number of user
parameters, since the algorithm should work in an automated way. Other algorithms typically use
fixed parameters to eliminate incorrect skeleton points close to curved regions. We use the curvature
to determine if candidate points should be accepted based on only one parameter specifying the
smallest resolved curvature.
Our method uses a Cartesian grid, but should be easy to extend to other background meshes. For all
edges in the computational grid, we fit polynomials to the distance function at each side of the edge,
and detect if they cross somewhere along the edge (see Figure 2 of [Per-Olof Persson]583). Such a
crossing becomes a candidate for a new skeleton point and we apply several tests, more or less
heuristic, to determine if the point should be accepted.
We scale the domain to have unit spacing, and for each edge we consider the interval s ∈ [−2, 3] where
s ∈ [0, 1] corresponds to the edge. Next we fit quadratic polynomials p1 and p2 to the values of the
distance function at the two sides of the edge, and compute their crossings. Our tests to determine if
a crossing should be considered a skeleton point are summarized below:
• There should be exactly one root s0 along the edge s ∈ [0, 1].
• The derivative of p2 should be strictly greater than the derivative of p1 in s ∈ [−2, 3] (it is
sufficient to check the endpoints, since the derivatives are linear)
• The dot product α between the two propagation directions should be smaller than a
tolerance, which depends on the curvatures of the two fronts (see below).
• We reject the point if another crossing is detected within the interval [−2, 3] with a larger
derivative difference dp2/ds − dp1/ds at the crossing s0.
579 Blum H. “Biological Shape and Visual Science (Part I).” Journal of Theoretical Biology, vol. 38, 205–287, 1973.
580 Kimmel R., Shaked D., Kiryati N., Bruckstein A.M. “Skeletonization via Distance Maps and Level Sets.” Computer
Vision and Image Understanding: CVIU, vol. 62, no. 3, 382–391, 1995.
581 Siddiqi K., Bouix S., Tannenbaum A., Zucker S. “The Hamilton-Jacobi Skeleton.” International Conference on
The dot product α is evaluated from one-sided difference approximations of ⊽φ. This is compared to
the expected dot product between two fronts from a circle of radius 1/|κ|, where κ is the largest
curvature at the two points. With one unit separation between the points and an angle θ between the
fronts, this dot product is
\cos\theta = 1 - 2\sin^2\!\left(\frac{\theta}{2}\right) = 1 - 2\left(\frac{|\kappa|}{2}\right)^{2} = 1 - \frac{\kappa^2}{2}
Eq. 15.14
We reject the point if the actual dot product α is larger than this for any of the curvatures κ1, κ2 at the
two sides of the edge or the given tolerance κtol. We calculate κ using difference approximations, and
to avoid the shock we evaluate it one grid point away from the edge. To compensate for this we
include a tolerance in the computed curvatures. If the point is accepted as a medial axis point, we
obtain the normal of the medial axis by subtracting the two gradients. The distances from the medial
axis to the two neighboring points are then |nx h s0| and |nx h (1 − s0)|. These are used as boundary
conditions when solving for φMA(x) in the entire domain using the fast marching method. Some
examples of medial axis detections are shown in Figure 3 of [Per-Olof Persson]584.
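The edge test can be sketched as follows, assuming unit grid spacing and three distance samples on each side of the edge; only the first two acceptance tests (a single root on the edge and the derivative condition) are shown, while the curvature-based dot-product test described above is omitted. Function and variable names are illustrative.

```python
import numpy as np

def edge_skeleton_candidate(d_left, d_right):
    """Test one grid edge for a medial-axis crossing. d_left holds distance
    samples at s = -2, -1, 0 and d_right at s = 1, 2, 3 (unit spacing)."""
    p1 = np.polyfit(np.array([-2.0, -1.0, 0.0]), d_left, 2)   # quadratic, left side
    p2 = np.polyfit(np.array([1.0, 2.0, 3.0]), d_right, 2)    # quadratic, right side
    roots = np.roots(np.polysub(p1, p2))                      # crossings of p1 and p2
    roots = roots[np.isreal(roots)].real
    on_edge = roots[(roots >= 0.0) & (roots <= 1.0)]
    if len(on_edge) != 1:                                     # exactly one root on s in [0, 1]
        return None
    dp1, dp2 = np.polyder(p1), np.polyder(p2)
    # derivative of p2 must exceed that of p1 on [-2, 3]; the derivatives are
    # linear, so checking the endpoints is sufficient
    if not all(np.polyval(dp2, s) > np.polyval(dp1, s) for s in (-2.0, 3.0)):
        return None
    return on_edge[0]                                         # candidate location s0
```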
584 Per-Olof Persson, “Mesh Size Functions For Implicit Geometries and PDE-Based Gradient Limiting”, Dept. of Mathematics, Massachusetts Institute of Technology.
We present a new technique to handle the gradient limiting problem, by a continuous formulation of
the process as a Hamilton-Jacobi equation. Since the mesh size function is defined as a continuous
function of x, it is natural to formulate the gradient limiting as a PDE with solution h(x) independently
of the actual background mesh. We can see many benefits in doing this:
➢ The analytical solution is exactly the optimal gradient limited size function h(x) that we want,
as shown by (see [Persson]585). The only errors come from the numerical discretization,
which can be controlled and reduced using known solution techniques for hyperbolic PDEs.
\frac{\partial h}{\partial t} + |\nabla h| = \min\bigl(|\nabla h|,\, g\bigr), \qquad \text{with initial condition } h(\mathbf{x}, t = 0) = h_0(\mathbf{x})
Eq. 15.16
When |∇h| ≤ g, Eq. 15.16 gives that ∂h/∂t = 0, and h will not change with time. When |∇h| > g, the
equation will enforce |∇h| = g (locally), and the positive sign multiplying |∇h| ensures that
information propagates in the direction of increasing values. At steady state we have that |∇h| =
min(|∇h|, g), which is the same as |∇h| ≤ g. For the special case of a convex domain in Rn and constant
g, we can derive an analytical expression for the solution to Eq. 15.16, showing that it is indeed the
optimal solution. For further details and related theorems, please consult the [Per-Olof Persson]586
15.2.1.5.2 Implementation
One advantage with the continuous formulation of the problem is that a large variety of solvers can
be used almost as black-boxes. This includes solvers for structured and unstructured grids, higher-
order methods, and specialized fast solvers. The dashed lines are the initial conditions h0 and the
solid lines are the gradient limited steady-state solutions h for different parameter values g. On a
Cartesian background grid, the Eq. 15.16 can be solved with just a few lines of code using the
following iteration:
h^{n+1}_{ijk} = h^{n}_{ijk} + \Delta t \left( \min\bigl(\nabla^{+}_{ijk},\, g\bigr) - \nabla^{+}_{ijk} \right)
Eq. 15.17
where
\nabla^{+}_{ijk} = \Bigl[ \max\bigl(D^{-x} h^{n}_{ijk}, 0\bigr)^2 + \min\bigl(D^{+x} h^{n}_{ijk}, 0\bigr)^2 + \max\bigl(D^{-y} h^{n}_{ijk}, 0\bigr)^2 + \min\bigl(D^{+y} h^{n}_{ijk}, 0\bigr)^2 + \max\bigl(D^{-z} h^{n}_{ijk}, 0\bigr)^2 + \min\bigl(D^{+z} h^{n}_{ijk}, 0\bigr)^2 \Bigr]^{1/2}
Eq. 15.18
585 See Previous.
586 Per-Olof Persson, “Mesh Size Functions For Implicit Geometries and PDE-Based Gradient Limiting”, Dept. of Mathematics, Massachusetts Institute of Technology.
Here, D−x is the backward difference operator in the x-direction, D+x the forward difference operator,
and so on. The iterations are initialized by h⁰ = h0(x), and we iterate until the updates Δh(x) are smaller than a
given tolerance. The Δt parameter is chosen to satisfy the CFL condition; we use Δt = Δx/2. The
boundaries of the grid do not need any special treatment since all characteristics point outward. The
iteration of Eq. 15.17 converges relatively fast, although the number of iterations grows with the
problem size, so the total computational complexity is superlinear.
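A minimal NumPy sketch of this iteration in 2D is shown below. It assumes the initial field h0 is finite (any infinite entries replaced beforehand by the maximum allowed size) and uses a simple zero-gradient treatment at the grid borders; the function name and convergence tolerance are illustrative.

```python
import numpy as np

def gradient_limit(h0, dx, g, tol=1e-3):
    """Iteratively enforce |grad h| <= g on a 2D Cartesian grid, a sketch
    of the update in Eq. 15.17 with the upwind gradient of Eq. 15.18."""
    h  = h0.copy()
    dt = dx / 2.0                                              # CFL-type time step
    while True:
        dmx = np.diff(h, axis=0, prepend=h[:1, :]) / dx        # backward difference, x
        dpx = np.diff(h, axis=0, append=h[-1:, :]) / dx        # forward  difference, x
        dmy = np.diff(h, axis=1, prepend=h[:, :1]) / dx        # backward difference, y
        dpy = np.diff(h, axis=1, append=h[:, -1:]) / dx        # forward  difference, y
        grad_plus = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2 +
                            np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
        h_new = h + dt * (np.minimum(grad_plus, g) - grad_plus)  # Eq. 15.17
        if np.max(np.abs(h_new - h)) < tol:
            return h_new
        h = h_new
```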
Nevertheless, the simplicity makes this a good choice in many situations. If a good initial guess is
available, this time-stepping technique might even be superior to other methods. This is the case for
problems with moving boundaries, where the size function from the last mesh is likely to be close to
the new size function, or in numerical adaptivity, when the original size function already has
relatively small gradients because of numerical properties of the underlying PDE.
The scheme in Eq. 15.17 is first-order accurate in space, and higher accuracy can be achieved by
using a second-order solver. For faster solution of Eq. 15.16 we use a modified version of the fast
marching method (see Section 15.2.1.2.2). The main idea for solving our PDE (Eq. 15.16) is based
on the fact that the characteristics point in the direction of the gradient, and therefore smaller values
are never affected by larger values. This means we can start by fixing the smallest value of the
solution, since it will never be modified. We then update the neighbors of this node by a discretization
of our PDE, and repeat the procedure. To find the smallest value efficiently we use a min-heap data
structure. During the update, we have to solve for a new hijk in ⊽+ ijk = g, with ⊽+ ijk from (Eq. 15.18).
This expression is simplified by the fact that hijk should be larger than all previously fixed values of h,
and we can solve a quadratic equation for each octant and set hijk to the minimum of these solutions.
Our fast algorithm is summarized as pseudo-code; all nodes are initialized in the heap, so we do not have any FAR points. The
actual update involves a nonlinear right-hand side, but it always returns increasing values, so the
update procedure is valid. The heap is large since all elements are inserted initially, but the access
time is still only O(log n) for each of the n nodes in the background grid. In total, this gives a solver
with computational complexity O(n log n). For higher-order accuracy, the technique described in 587
can be applied.
An unstructured background grid gives a more efficient representation of the size function and
higher flexibility in terms of node placement. A common choice is to use an initial Delaunay mesh,
possibly with a few additional refinements. Several methods have been developed to solve Hamilton-
Jacobi equations on unstructured grids, and we have implemented the positive coefficient scheme by
[Barth and Sethian]588. The solver is slightly more complicated than the Cartesian variants, but the
numerical schemes can essentially be used as black-boxes. A triangulated version of the fast marching
method was given in591, and the algorithm was generalized to arbitrary node locations592. One particular
unstructured background grid is the octree representation, and the Cartesian methods extend
naturally to this case (both the iteration and the fast solver). The values are interpolated on the
boundaries between cells of different sizes. We mentioned in the introduction that octrees are
commonly used to represent size functions, because of the possibility to balance the tree and thereby
get a limited variation of cell sizes. Here, we propose to use the octree as Sian grids. The
587 Sethian J.A. “A fast marching level set method for monotonically advancing fronts.” Proc. Nat. Acad. Sci. U.S.A.,
vol. 93, no. 4, 1591–1595, 1996.
588 Barth T.J., Sethian J.A. “Numerical schemes for the Hamilton-Jacobi and level set equations on triangulated
a convenient and efficient representation, but the actual values of the size function are computed using
our PDE. This gives higher flexibility, for example the possibility to use different values of g.
15.2.1.6 Results
We are now ready to put all the pieces together and define the complete algorithm for generation of
a mesh size function. The size functions from curvature and feature size are computed as described
in the previous sections. The external size function hext(x) is provided as input. Our final size function
must be smaller than these at each point in space:
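A natural combination, consistent with the optimality requirement above, is the pointwise minimum of the individual size functions, to which the gradient limiting of Eq. 15.16 is then applied (written here as a plausible summary rather than a quotation of the original expression):

```latex
h_0(\mathbf{x}) = \min\bigl( h_{\mathrm{curv}}(\mathbf{x}),\, h_{\mathrm{lfs}}(\mathbf{x}),\, h_{\mathrm{ext}}(\mathbf{x}) \bigr),
\qquad
h(\mathbf{x}) = \text{gradient-limited } h_0(\mathbf{x}) \ \text{ such that } \ |\nabla h| \le g
```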
unstructured gradient limiting solver. The initial size function h0(x) is based on the local feature size
and the curved boundary at the top. Note that although the regions on the two sides of the slit are
close to each other, the small mesh size at the curved boundary does not influence the other region.
This solution is harder to express using source expressions such as (Eq. 15.21), where more
expensive geometric search routines would have to be used.
A more complicated example is shown in Figure 15.21. Here, we have computed the local feature
size everywhere in the interior of the geometry. We compute this using the medial axis based
definition from section 15.2.1.4. The result is stored on a Cartesian grid. In some regions the gradient
of the local feature size is greater than g, and we use the fast gradient limiting solver in (see Algorithm
2 in [Per-Olof Persson]589) to get a well-behaved size function. We also use curvature adaptation as
before. Note that this mesh size function would be very expensive to compute explicitly, since the
feature size is defined everywhere in the domain, not just on the boundaries.
589 Per-Olof Persson, “Mesh Size Functions For Implicit Geometries and PDE-Based Gradient Limiting”, Dept. of Mathematics, Massachusetts Institute of Technology.
As a final example of 2D mesh generation, we show an object with smooth boundaries in Figure
15.22. We use a Cartesian grid for the background grid and solve the gradient limiting equation using
the fast solver. The feature size is again computed using the medial axis and the distance function,
and the curvature is given by the expression with grid correction (Eq. 15.11) since the grid is not
aligned with the boundaries. The PDE-based formulation generalizes to arbitrary dimensions, and in
Figure 15.23 we show a 3D example. Here, the feature size is computed explicitly from the geometry
description, the curvature adaptation is applied on the boundary nodes, and the size function is
computed by gradient limiting with g = 0.2. This results in a well-shaped tetrahedral mesh, in the
bottom plot.
• numerical adaptation,
• mesh generation for images,
• non-constant g values.
increase κtol to compensate for the slightly noisy distance function close to the boundary590. All
techniques used for meshing two-dimensional images extend directly to higher dimensions. The
image is then a three-dimensional array of pixels, and the binary mask selects a sub-volume.
Examples of this are the sampled density values produced by computed tomography (CT) scans in
medical imaging, for which we created mesh size functions and tetrahedral meshes.
Figure 15.25 Gradient Limiting with Space-Dependent g(x)
Figure 15.26 Meshing Objects in an Image
590The segmentation is done with an image manipulation program, the distance function is computed by
smoothing and approximate projections, and the size function uses the curvature, the feature size, and gradient
limiting.
➢ Other feature size algorithms. We have experimented with a PDE-based algorithm that
convects the distance function along its characteristics, and a simpler direct algorithm that
finds the largest spheres without first extracting the medial axis.
➢ Other background meshes for the feature size. We have worked exclusively with Cartesian
and octree grids for the medial axis based feature size calculations, and a version for
unstructured meshes would be useful.
➢ Implementing a fast marching based solver for triangular/tetrahedral background meshes.
15.2.2 Case Study 2 - Neighborhood-Based Element Sizing Control for Finite Element Surface
Meshing594
A method is presented for controlling element sizes on the interior of areas during surface meshing.
A Delaunay background mesh is defined over which a neighborhood based interpolation scheme is
used to interpolate element sizes. A brief description of natural neighbor interpolation is included
and compared to linear interpolation. Two specific applications are presented that utilize the sizing
function, namely boundary layer meshing and surface curvature refinement. For these applications,
the criteria used for inserting additional interior vertices into the background mesh to control element
sizing are discussed.
15.2.2.1 Introduction
In recent years CAD software has become a popular means for generating geometry for the finite
element method. The surfaces and volumes described by CAD can come in a variety of forms including
parametric surfaces such as NURBS or tessellated geometry consisting of triangle facets. Because CAD
models will often have regions of high curvature or very tiny features in the same model with larger
or flatter features, automated mesh generation resulting in high quality, well-graded finite elements
can be a difficult task. In order to adequately describe all features of the model without generating
huge numbers of elements, large transitions in element sizes can be required.
To control element sizes during the mesh generation process, an element sizing function can be
defined. When meshing, this function may take into account surface features, proximity to other
surfaces, surface curvature as well as physical properties such as boundary layers, surface loads or
error norms in determining local element sizes.
While a wide variety of geometric and physical properties may be used in determining element sizes,
the question remains as to how these can be combined to define a single smooth sizing function over
the domain of the surface. This paper discusses a neighborhood-based interpolation method that can
take into account any of the above features while maintaining a smoothly varying sizing function. It
will also look at two specific surface meshing problems that can be handled using this approach,
namely surface curvature and boundary layer meshing.
There are a great many aspects that can affect the final quality of a surface mesh. This paper focuses
on only one of these, that of describing the background sizing function. The sizing
function is defined as a pre-process to the actual surface meshing algorithm. During the meshing
process, the sizing function is periodically evaluated, providing information to control local element
sizes. Although relatively insignificant in total CPU time compared to the entire surface meshing
591 Kimmel R., Sethian J.A. “Fast Marching Methods on Triangulated Domains.” Proceedings of the National
Academy of Sciences, vol. 95, pp. 8341–8435. 1998
592 Covello P., Rodrigue G. “A generalized front marching algorithm for the solution of the eikonal equation.” J.
Meshing”, Department of Civil and Environmental Engineering, Carnegie Mellon University And ANSYS Inc. 275
Technology Drive, Cannonsburg, PA, USA.
process, the sizing function can ultimately influence the final quality and grading of the elements,
perhaps more than any other phenomenon.
Figure 15.28 Delaunay Tessellation of Boundary Vertices used for Background Mesh
595 Canann, Scott A., Yong-Cheng Liu and Anton V. Mobley (1997), “Automatic 3D Surface Meshing to Address
Today’s Industrial Needs”, Finite Elements in Analysis and Design. Elsevier, Vol. 25, pp.185-198
596 Lohner, Rainald (1996), “Extensions and Improvements of the Advancing Front Grid Generation Technique,”
Communications in Numerical Methods in Engineering, John Wiley & Sons, Vol. 12, pp. 683-702.
locate a point in the Delaunay background mesh, a process in itself which can be somewhat time
consuming, it is advantageous to maintain a background mesh as sparse as possible, while still
maintaining the important sizing details of the model.
The method used for interpolation can greatly affect the quality of the resulting mesh. Although
simple linear interpolation is frequently used, some weaknesses have been noted with this method.
An improved method, known as natural neighbor interpolation, is used to define the sizing function.
d_x = \sum_{i=0}^{2} w_i\, d_i
Eq. 15.22
where di are the element sizes at each of the three vertices of the triangle in the Delaunay background
mesh containing Px, and wi are the barycentric coordinates of Px within the same triangle; for
example:
w_0 = \frac{\begin{vmatrix} x & x_1 & x_2 \\ y & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix}}{\begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix}}, \qquad
w_1 = \frac{\begin{vmatrix} x_0 & x & x_2 \\ y_0 & y & y_2 \\ 1 & 1 & 1 \end{vmatrix}}{\begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix}}, \qquad
w_2 = \frac{\begin{vmatrix} x_0 & x_1 & x \\ y_0 & y_1 & y \\ 1 & 1 & 1 \end{vmatrix}}{\begin{vmatrix} x_0 & x_1 & x_2 \\ y_0 & y_1 & y_2 \\ 1 & 1 & 1 \end{vmatrix}}
Eq. 15.23
In some cases, linear interpolation provides an adequate description of the element sizing function.
Poor results can arise when the triangles in the background Delaunay mesh become poorly shaped.
Since in most cases, only the boundary nodes are tessellated, long, skinny triangles are typical similar
to those shown in Figure 15.28. As a result, abrupt changes in element size are common resulting
in less than desirable element quality and transitions.
597 Sibson, R., (1981), “A Brief Description of Natural Neighbor Interpolation,” Interpreting Multivariate Data,
John Wiley & Sons, New York, pp. 21-36.
598 Owen, Steven J., (1992), An Implementation of Natural Neighbor Interpolation in Three Dimensions, Master’s
d_x = \sum_{i=0}^{n-1} w_i\, d_i
Eq. 15.24
where n is now the number of neighbor vertices. The distinguishing features of natural neighbor
interpolation include the manner in which the neighbor vertices are selected and the computation of
the weight, wi, at each of the vertices. The n vertices included in the interpolant are defined as all
vertices belonging to triangles whose circumcircles contain Px. This criterion is exactly that used in the
well-known Delaunay triangulation algorithm described by [Watson]601. Since the process of
computing the neighboring triangles is an integral part of the Delaunay triangulation procedure, this
information can not only be used to insert a new vertex into the mesh, but to compute the
interpolated element size at the same vertex.
The weights, wi at each of the neighboring vertices are defined using what [Sibson]602 first described
as local coordinates. Local coordinates are often thought of as generalized barycentric coordinates.
While barycentric coordinates can be defined as describing space with respect to three points in R2,
local coordinates define space with respect to an arbitrary number of points. The local coordinate, wi
is defined as follows:
w_i = \frac{\kappa(\pi_i)}{\kappa(\pi)}
Eq. 15.25
Eq. 15.25
where κ(π) is the area of the Voronoi polygon defined after Px is inserted into the domain, and κ(πi)
is the difference in area of the Voronoi polygon πi at vertex i before and after Px is inserted. The
Voronoi polygon associated with vertex i is that space in R2 closer to vertex i than any other vertex in
the mesh. More precisely, the Voronoi polygon πi associated with vertex Pi is defined as:
πi = {x ∈ R² : |x − Pi| < |x − Pj|, j = 1, …, N, j ≠ i}
Eq. 15.26
where N is the total number of vertices in the mesh. A graphical representation of local coordinates
can be described as follows. In this example, the interpolant is Px. Solid lines
represent the Voronoi polygons for each vertex in the domain after temporarily inserting Px into the
domain. The dashed lines represent the Voronoi polygons before insertion. In this case, vertices 1, 4,
5, 6 and 9 are selected as neighbors and used to weight the interpolation. The area κ(π) is that of the
Voronoi polygon defined by Px, and the κ(πi) are the differences in the Voronoi areas of the polygons before and
after Px is inserted for each neighbor vertex. From this example it can be seen that the sum of the κ(πi) will
be κ(π); hence, from Eq. 15.25, the sum of the weights, wi, will be one.
A detailed description of the calculation of the local coordinates is provided by the [author]603 and
[Watson]604. The author describes a simple method for computing the areas of Voronoi polygons
based on triangle circumcenters. Watson describes a more efficient method, which avoids the explicit
formulation of the Voronoi polygons.
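As an illustration only, the local coordinates of Eq. 15.25 can also be approximated by brute force, rasterizing the Voronoi diagram on a fine sampling grid and measuring how much area the inserted point Px takes from each existing vertex. This is not the circumcenter- or Watson-style computation cited above; the grid resolution, bounding box, and function name are assumptions of the sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def sibson_weights(px, pts, xlim, ylim, n=400):
    """Approximate natural neighbor (Sibson) weights of Eq. 15.25 by
    sampling the Voronoi diagram on an n x n grid over the bounding box."""
    xs, ys = np.linspace(*xlim, n), np.linspace(*ylim, n)
    grid = np.column_stack([a.ravel() for a in np.meshgrid(xs, ys)])
    owner_before = cKDTree(pts).query(grid)[1]                   # Voronoi cell of each sample
    owner_after  = cKDTree(np.vstack([pts, px])).query(grid)[1]  # cells after inserting Px
    taken = owner_after == len(pts)                              # samples captured by Px
    stolen = np.bincount(owner_before[taken], minlength=len(pts))
    return stolen / taken.sum()     # kappa(pi_i)/kappa(pi) for each existing vertex

# The interpolated size then follows Eq. 15.24: d_x = sum(w_i * d_i)
```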
601 Watson, David, F. (1981), “Computing the Delaunay Tesselation with Application to Voronoi Polytopes”, The
Computer Journal, Vol 24(2) pp. 167-172
602 Sibson, R., (1981), “A Brief Description of Natural Neighbor Interpolation,” Interpreting Multivariate Data,
15.2.2.5 Applications
In many cases, it is sufficient to use only the boundary nodes of the area as input to describe the sizing
function. If the boundary divisions have been adequately defined and there is very little surface
curvature within the area, the resulting sizing function can produce very high quality elements. The
sizing may not be sufficient when internal surface features not described by line divisions are
required, or when physical phenomena require smaller element sizes close to boundaries.
In these cases, additional internal nodes may be inserted into the background mesh to more precisely
define the element sizing function.
P_c = P_b + \hat{N}_b\,\beta, \qquad P_t = P_b + \hat{N}_b\,(\beta + \chi), \qquad \text{where } N_b = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\bigl(P_{b1} - P_{b0}\bigr)
Eq. 15.27
605 Clements, Jan, Ted Blacker and Steven Benzley (1997), “Automated Mesh Generation in Boundary Layer
Regions Using Special Elements”, AMD-Vol.220 Trends in Unstructured Mesh Generation, ASME, pp.137-143.
606 Pirzadeh, Shahyar, (1993), “Unstructured Viscous Grid Generation by Advancing-Layers Method,” AIAA-93-
3453-CP, pp.420-434
Figure 15.31 Background Mesh used for Boundary Layer Mesh. Contours Generated From Natural
Neighbor Interpolation are Over-Laid
607 Watson, David, F. (1981), “Computing the Delaunay Tesselation with Application to Voronoi Polytopes”, The Computer Journal, Vol 24(2), pp. 167-172.
proposed vertex is only inserted if the interpolated element size at the new vertex differs by
more than a predefined percentage from the new element size, di, at the vertex. For this application,
a tolerance of 5% was used.
One problem which can arise from this procedure occurs when the boundary layers or transition
layers of opposing boundaries intersect. Vertices with widely varying element size definitions can
end up being placed close together, resulting in very poor element quality. To ensure this does not
occur, Pt cannot be within a radius of β + χ from any other boundary edge. A local distance check is
made to ensure this does not occur. In the event Pt is closer to a nearby edge than β + χ, the distance
is iteratively cut back until this requirement is satisfied. If this occurs, a new equivalent dt must also
be defined. The new dt is linearly interpolated between the old dt and the size dc at Pc.
r_c = \frac{h/2}{\sqrt{\dfrac{1 - \hat{N}_A \cdot \hat{N}_B}{2}}}
Eq. 15.29
Figure 15.34 Approximation of Radius of Curvature Between Two Points A, B on a Surface
Figure 15.34 shows graphically the relationships from which Eq. 15.29 is derived. The normals NA and
NB, together with the vector AB, form a triangle from which trigonometric relationships can be
developed, where cos θ is given by N̂A · N̂B. This relationship is exact for a quadratic surface (i.e. sphere,
cylinder) but is only an approximation for arbitrary surfaces. Even though exact curvature can be
easily computed for some parametric surface types, for faceted geometry, curvature is generally not
readily available and can be time consuming to compute. For this reason, normals are used to
approximate the radius of curvature. The accuracy of the resulting radius of curvature will in general
improve as h gets small.
Depending on whether the mesher works directly in parametric space or in 3D space, it may also be
necessary to apply a scaling factor to the resulting maximum element size, d, computed. If parametric
space is used, a local scaling factor that maps the size d to an equivalent size d′ in parametric space
is needed as follows:
d' = \left[ \frac{h'}{r_c \cos^{-1}\!\bigl(\hat{N}_A \cdot \hat{N}_B\bigr)} \right] d
Eq. 15.30
608 deCougny, H.L., and M.S. Shephard (1996), “Surface Meshing Using Vertex Insertion”, Proceedings, 5th
International Meshing Roundtable, Sandia National Laboratories, pp. 243-256.
where h’ is the distance in parametric space between points A and B on the surface. The denominator
in the above scale factor is simply an expression for the arc length in world space on the surface
between the same two points.
Now that a controlling maximum element size has been defined based on a user defined maximum
spanning angle, the question still remains as to where new sizing vertices should be placed into the
background mesh so that the element sizing function will better describe the surface curvature. It is
clear that a random or a gridded distribution of vertices in the background mesh will be inadequate,
since regions of maximum curvature can very easily be missed. It is typically only in these
regions of high curvature that additional sizing vertices need to be placed. The approach taken for
this application was to use a quadtree decomposition of the parametric space of the area.
The quadtree is initialized by evaluating the normal over an N by M grid of points on the surface. N
and M should be chosen so that a reasonable initial representation of the surface can be established.
A maximum value of 10 for N or M appeared to provide sufficient information for most
surfaces tested. Each set of four points defining a quadrilateral in the N by M grid serve as the root of
a quadtree.
The dot products between the normals at the corners of the quadrilateral are computed. The minimum
dot product between any two corners is used to approximate the radius of curvature, rc, and the
maximum element size, d. The current leaf of the quadtree is subdivided into its four child
quadrilaterals if the angle spanned over the quadrilateral is greater than the maximum spanning angle,
φ. The quadtree subdivision is also limited by a minimum allowable element size or maximum
number of subdivisions. This ensures the quadtree does not subdivide forever on surfaces with sharp
folds or discontinuities.
A new vertex is considered for insertion into the background mesh only at the local maximum
quadtree level. That is, any quadtree leaf that is not split into four children defines a potential vertex
at the centroid of its quadrilateral. The size, dx, defined at the vertex will be the smaller of the element
size, d, defined from the surface curvature, or dx defined from an interpolation at the same point
before any modifications for curvature. Similar to boundary layers, a vertex is only inserted into the
background mesh if the resulting change to the local element size is greater than the predefined
tolerance.
Although in many cases the background mesh is sufficient to control the quality and sizing of the
mesh, some additional measures can also be taken. If the advancing front method is used for
meshing, it is possible that for large transitions in element size, the smaller element regions
can be missed. Typically the size of a new element at the front is determined from an interpolation of
the background mesh.
Figure 15.35 Background Mesh and Contours of Sizing Function for a Parametric Surface
For large element sizes, the location interpolated may miss the smaller regions. This can generally be
avoided by keeping an ordered list of fronts sorted by size while meshing. If the smallest fronts are
always dealt with first, the sizing function can be better captured. A set of bins containing fronts of
approximately the same size is sufficient for maintaining the size ordering. Alternatively, rather than
a single interpolation into the background mesh, multiple interpolations can be performed at the
vertices and/or centroid of the proposed new triangle in the mesh. A weighted average of the
computed sizes can be used. Figure 15.35 shows the background mesh for a simple parametric
surface along with the resulting contours generated using natural neighbor interpolation. Figure
15.36 shows the resulting triangle mesh using both a maximum spanning angle, φ of 15 and 30
degrees.
Figure 15.36 Parametric surface meshed with two different max spanning angles (φ). Left φ=15
degrees, right: φ=30 degrees
15.2.2.8 Conclusion
Natural Neighbor interpolation has been proposed as a new method for providing element size
information to surface meshing algorithms. High quality elements can be generated in situations
requiring very large transitions or discretization of highly curved surfaces. While only two specific
criteria were used for defining element sizing information, many others can be developed and
combined. Future directions will inevitably include application to three dimensions. Three dimensional
boundary layer meshing is also an important topic, particularly for CFD applications. While the
interpolation method presented has been discussed with respect to isotropic sizing, extensions to
incorporate anisotropic criteria will also be considered. For further info, please consult the work by
15.2.3 Case Study 3 - Background Overlay Grid Size Functions610
15.2.3.1 Introduction
In the mesh generation field, mesh size control is critical to mesh quality and to successful field
simulations using the generated mesh. The mesh sizes need to capture local details in areas of the
geometry where small features exist. On the other hand, in non-critical areas of the geometry, the
mesh size can be large as long as the mesh transition is smooth enough. However, it is tedious to
manually determine the local features of the geometry and mesh these entities with the desired
sizes. Pre-meshing boundaries of the domain with the desired size is a standard way of obtaining size
transition and gradation. However, the user has no direct control over the mesh grading on the
geometry. Local refinement of an existing mesh is another option. Unfortunately, unrefined meshes
will form a fixed constraint on the refined areas, and the results are not always satisfactory.
Figure 15.37 Comparison of Meshing Results: (a) Circle Pre-Meshed with Size 0.05, Face Meshed with Size 1.0 (No Growth Control); (b) Face Meshed Directly with a Size Function (Growth Controlled); (c) Face Meshed with Quads
As a simple example of the importance of size functions, consider Figure 15.37. The geometry is a
10 x 10 square with a circular hole of radius 0.5. In Figure 15.37 (a) the inner circle is pre-meshed
with a size of 0.05 and the rest of the face is meshed with a size of 1.0 by an advancing front triangle
meshing algorithm. Transitions are handled by the algorithm itself, with no reliance on a size
function. A total of 5,656 elements are generated. In Figure 15.37 (b), the same algorithm is used,
but a size function is prescribed which guides the meshing. The size function used prescribes a size
of 0.05 at the hole boundary and a geometric growth rate of 1.2 based on the distance from the hole.
609 Steven J. Owen and Sunil Saigal, “Neighborhood-Based Element Sizing Control for Finite Element Surface
Meshing”, Department of Civil and Environmental Engineering, Carnegie Mellon University And ANSYS Inc.
275 Technology Drive, Cannonsburg, PA, USA.
610 Jin Zhu, Ted Blacker, Rich Smith, “Background Overlay Grid Size Functions”, 2002.
This growth is limited by a maximum size specified as 1.0. There are only 1,950 elements generated
in this case. You can see that the mesh gradation is well controlled by the growth rate in Figure
15.37 (b) compared with the mesh pattern in Figure 15.37 (a). When meshing the same face with
a quadrilateral/paving algorithm611, no mesh could be obtained without a size function because of
the extreme gradation difference and the lack of interior gradation control. With the size function it
can be meshed nicely, as shown in Figure 15.37 (c).
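The behavior just described can be realized, for instance, by a distance-based evaluator with geometric growth away from the source; the exact formula used by the tool is not given here, so the sketch below (with the values of Figure 15.37 (b): source size 0.05, growth rate 1.2, size limit 1.0) should be read as one possible implementation rather than the actual one.

```python
import math

def size_from_source(dist, s0=0.05, growth=1.2, s_max=1.0):
    """Size at a distance 'dist' from the source when each successive element
    layer is 'growth' times thicker than the previous one, capped at s_max."""
    if dist <= 0.0:
        return s0
    # number of geometrically growing layers that fit within 'dist':
    # s0 * (growth**n - 1) / (growth - 1) = dist  =>  solve for n
    n = math.log(1.0 + dist * (growth - 1.0) / s0) / math.log(growth)
    return min(s0 * growth ** n, s_max)
```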
During the meshing process it is highly desirable that some guidance be provided to the mesh tools
to specify the size of elements to be defined and the variation of size from one part of the domain to
another. Sizing and gradation control can be determined during the meshing process or more
commonly as an a priori procedure. As an a priori procedure, a size function is defined over the entire
domain. The sizing function, d = f (x), where d is the target element size and x is the location in the
domain, can be customized for specific geometric or physical properties. The sizing function may
take into account surface features as well as physical properties in determining local element sizes.
Such surface features as proximity to other surfaces and/or surface curvature can be used to control
surface mesh density distribution. Physical properties such as boundary layers, surface loads or error
norms from a previous solution may be considered. For instance, in an adaptive finite element
scheme, a size specification in the simulated field is deduced from simulation results, usually via an
error estimate. This may then be combined with the geometric constraints of the face being considered. The
size specification is then normalized by metrics and this metric map that defines a control space is
used to control the mesh gradation612.
Many authors have described the use of some form of element size control in the literature for a
specific meshing algorithm. Based on the spatial decomposition approach for meshing purposes as
pioneered about two decades ago by [Yerry & Shepard]613 and surveyed by [Thacker]614 and
[Shepard]615, a size-governed quadtree triangle mesh generation method was presented by [Frey and
Marechal]616 to deal with planar domains of arbitrary shape. The domain is first decomposed into a
set of cells. The sizes of these tree cells are adjusted to match the element sizes at boundaries of
the domain prescribed by a given size map, and the mesh gradation is controlled by the level of
refinement of the cells using the [2:1] rule. Therefore, these cells have a size distribution compatible
with the desired mesh gradation and so can provide a convenient control space which can be used to
determine the element size. Secondly, the quadrants are triangulated accordingly to get full triangle
elements. Finally the triangles are optimized (i.e. smoothed).
Currently, a background mesh appears to be the most commonly used means of defining an
element sizing function. In the background mesh method, collections of vertices containing the
sizing information are first selected. Then Delaunay triangulation is performed with them, inserting
additional interior nodes. Finally the meshing tool retrieves a target size at any location within the
domain by linear interpolation within the containing background triangle (for 2D) or tetrahedron (for 3D).
611 Ted D. Blacker, Michael B. Stepheson, “Paving: A new approach to automated quadrilateral mesh generation”,
Int. J. Num. Methods Eng. Vol 32, pp. 811-847 (1991).
612 Houman Borouchaki, Frederic Hecht and Pascal Frey, Mesh gradation control, Proceedings of 6th
International meshing roundtable. Oct. 13-15, 1997. Park City, Utah, USA.
613 M.A. Yerry and M.S. Shepard, “A modified-quadtree approach to finite element mesh generation”, IEEE
[Pirzadeh]617 introduced an approach that adopted a uniform Cartesian grid and elliptic grid point
distribution for generating 2D unstructured meshes using the advancing front technique. It was
analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing
parameters of grid points were distributed over the nodes of the Cartesian background grid by
solving a Poisson equation. To increase the control over the grid point distribution, a directional
clustering approach was also implemented. However, there will be some mathematical difficulties
when it is used for general 3D problems and/or with non-nodal and non-linear sources. More
recently, [Owen and Saigal]618 presented a method of controlling element size on parametric
surfaces, taking into account boundary layers, surface curvature and anisotropy, and using natural
neighborhood interpolation. Related works using background mesh can be found in619-620-621.
Although the algorithms discussed above are effective and useful in many respects, none gives a
general and versatile way of controlling size for all kinds of geometry and all types of element. The goal
of this work was to create a general way of defining mesh size for all element types and for different
kinds of geometric features. The size function had to provide very rapid evaluators that would be
general for any meshing algorithm. Also, local geometric effects had to be able to radiate, or influence
size on a more non-local area. For example, tight curvature on one surface should affect other
edges/surfaces in close proximity to ensure a controlled transition rate. This paper describes how
these objectives were met using a background overlay grid. The work will be described by first
defining the size functions provided to the user and the size function initializations. Then details of
the use of a background grid are documented and examples of its use given.
➢ Source entities: Source entities are a set of geometric entities on which the mesh sizes are
specified and from which the mesh size is grown into affected areas. Source entities can be
any general geometric type including vertex, edge, face or volume.
➢ Attached entities: the geometric entities on which the size functions will have influence as
the entities are meshed. These include edge, face or volume. The attached entity can be the
same entity as the source.
When a size function is attached to an upper topology, all lower topologies of the attached entity will
be influenced. When a size function is attached to a lower topology, its upper topologies will not be
affected.
➢ Growth rate: This parameter controls the geometric pace with which the mesh size
progresses from the source. It is based on the distance of the point being evaluated from the
source.
617 Shahyar Pirzadeh, “Structured background grids for generation of unstructured grids by advancing-front
method”, AIAA Journal. Vol 31(2), pp. 257-265(1993).
618 Steven Owen and Sunil Saigal, “Surface mesh sizing control”, Int. J. Num. Meth. Eng. Vol 47, (2000).
619 J.Z. Zhu, O.C. Zienkiewicz, E. Hinton, J. Wu, “A new approach to the development of automatic quadrilateral
mesh generation”, Int. J. Num. Meth. Eng. Vol 32, pp. 849-866, (1991).
620 S.A. Canann, Y.C. Liu, A.V. Mobley, “Automatic 3D surface meshing to address today's industrial needs”, Finite
In cases where elements of significantly different sizes are immediately adjacent to each other,
neither the meshing tool nor the simulation tool can perform well. In order to maintain a desired
growth ratio, the target size is adaptively adjusted by applying a geometric growth formula; this
parameter specifies the rate of that geometric progression (a minimal sketch of one plausible growth law follows).
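The precise growth formula is not reproduced in the text; the sketch below therefore assumes one plausible form, in which element layers grow geometrically away from the source so that n layers span a distance d = S0 (g^n - 1)/(g - 1). The function name and all values are illustrative.

import math

def grown_size(d, source_size, growth_rate, size_limit):
    # Assumed growth law: element layers of geometrically increasing size,
    # so n layers span a distance d = source_size * (g**n - 1) / (g - 1).
    if d <= 0.0 or growth_rate <= 1.0:
        return min(source_size, size_limit)
    g = growth_rate
    n = math.log(1.0 + d * (g - 1.0) / source_size) / math.log(g)  # layers needed to span d
    size = source_size * g ** n       # equivalent to source_size + d * (g - 1)
    return min(size, size_limit)

# Size 1.0 at the source, 20 % growth per layer, capped at 5.0
for d in (0.0, 2.0, 10.0, 50.0):
    print(d, round(grown_size(d, 1.0, 1.2, 5.0), 3))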
➢ Distance limit: This variable specifies the range over which the size function is valid. It is the
distance required for the source mesh size to grow up to the size limit; it is not user controlled.
➢ Proximity: The mesh size is determined from the gap between any two closest opposing faces (volumetric
mesh) or two opposing edges (surface mesh). Source entities for a proximity size function can be faces or
volumes. When a volume is used as the source, all faces of the volume become source faces. The proximity
check for all source faces includes a check of the proximity of edges on the face.
Sn = 2 sin(θmax / 2) ρmax     Eq. 15.31
Here ρmax is the larger curvature along the two orthogonal axes. If the computed size is larger than
the size limit, or if a face is flat (i.e. has no curvature), then the specified size limit is used. It is
possible for the local size Sn to be larger than the radiation from a nearby node would permit; thus,
if the radiated size of node m at node n, Sm, is less than Sn, the radiated size Sm is used.
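A minimal sketch of this curvature control, assuming ρmax is interpreted as the radius associated with the tighter of the two curvatures and that the radiated size from a neighboring node is supplied separately (all values below are hypothetical):

import math

def curvature_size(rho_max, theta_max_deg, size_limit):
    """Sketch of the curvature size function of Eq. 15.31.

    rho_max is taken here as the radius corresponding to the tighter of the
    two orthogonal curvatures (an assumption); theta_max is the largest
    angle a single element edge is allowed to span on the curved face.
    """
    if rho_max is None or math.isinf(rho_max):   # flat face: no curvature
        return size_limit
    s = 2.0 * math.sin(math.radians(theta_max_deg) / 2.0) * rho_max
    return min(s, size_limit)

# 10-degree maximum spanned angle on a radius-2 face, size limit 2.0
s_n = curvature_size(2.0, 10.0, 2.0)
# Keep the smaller of the local curvature size and a size radiated
# from a nearby node m (hypothetical value).
s_m_radiated = 0.25
print(round(s_n, 4), min(s_n, s_m_radiated))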
Each facet that needs refinement is split along its longest edge into two smaller ones. The sub-facets
are then compared with the target facet and iteratively refined as needed. Finally, this gap value is
stored in each facet. The process is optimized by computing the distance between the bounding box of
the current facet and the bounding boxes of the other target facets and comparing this distance to the
stored minimum distance. If the computed distance is beyond the stored minimum range of the current
facet, the remaining calculations are skipped. This significantly reduces the number of distance
calculations needed, and the refined facets localize the gap influence.
Also, since we are often only concerned with the gap within volumes, if two facets are from faces
that belong to the same volume, we can make use of the relation between the facet normal vectors to avoid
unnecessary comparisons. If their normal vectors, whose positive directions are defined as pointing
outward from the volume, point toward each other, there is void space in between and
we can omit the proximity check. Even if no volume is provided as the source entity, we have to check
whether the given source faces belong to the same volume. If they do, we still establish the
volume pointer internally and compare the facet normals in order to speed up the initialization.
If any face that owns a facet in the pair being compared is a dangling face of the volume, the
normal of its facets is ambiguous and the full calculations are needed.
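A sketch of this normal-based culling test, assuming facet centroids and outward normals are available (the function and data are illustrative, not the actual implementation):

def can_skip_proximity_check(c1, n1, c2, n2):
    """c1, c2: facet centroids; n1, n2: outward facet normals (same volume).

    If the outward normals point toward each other, the space between the
    facets lies outside the volume, so the gap check can be skipped.
    """
    d = [c2[i] - c1[i] for i in range(3)]          # vector from facet 1 to facet 2
    dot1 = sum(n1[i] * d[i] for i in range(3))     # > 0: n1 points toward facet 2
    dot2 = sum(n2[i] * -d[i] for i in range(3))    # > 0: n2 points toward facet 1
    return dot1 > 0.0 and dot2 > 0.0

# Two facets facing each other across void space (hypothetical data)
print(can_skip_proximity_check((0, 0, 0), (1, 0, 0), (1, 0, 0), (-1, 0, 0)))  # True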
Besides the face proximity check (i.e. 3D proximity), an option of performing an edge proximity check (i.e.
2D proximity) is provided. During edge checking, all the edges of a face are faceted into line
segments. The distance between each pair of opposing edge segments on the same face is
computed and the shortest distance is used to determine the mesh size on each line segment. To speed
up the computation, only segment pairs within a mutually visible range are checked. To guarantee
accuracy, a minimum number of edge segments is created (50 in our implementation), especially
for very short edge loops. For both the volume and face proximity controls, the mesh size on the
source entity is determined by:
Sn = dgap / Ncell     Eq. 15.32
where dgap is the smallest gap distance associated with a facet and Ncell is the number of cells in the gap
area.
γ = (R − Rn) / (Rn+1 − Rn)     Eq. 15.35
Here 0 ≤ γ ≤ 1. The actual size Sp at the given point P is computed as:
Sp = (1 − γ) Sn + γ Sn+1     Eq. 15.36
However, the final size is the smaller of the computed size and the defined size limit.
Ss = Σ (i = 1 … 8) Ni Si     Eq. 15.38
where Si is the mesh size at the 8 corners of the background cell into which the point falls and Ni is
the tri-linear interpolation function for each corner point, expressed in terms of the local
coordinates (α, β, γ) of the given point inside the background cell.
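A sketch of the corner interpolation of Eq. 15.38, assuming a particular corner ordering of the background cell and the standard tri-linear shape functions (both are assumptions, since the original expressions for Ni are not reproduced here):

def trilinear_size(corner_sizes, alpha, beta, gamma):
    """Sketch of Eq. 15.38: S_s = sum(N_i * S_i) over the 8 cell corners.

    corner_sizes are ordered so that corner i has local coordinates
    (a_i, b_i, c_i) with a, b, c in {0, 1}; (alpha, beta, gamma) are the
    local coordinates of the query point, each in [0, 1].
    """
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    s = 0.0
    for i, (a, b, c) in enumerate(corners):
        # Standard tri-linear shape function for corner i
        n_i = (alpha if a else 1 - alpha) * \
              (beta if b else 1 - beta) * \
              (gamma if c else 1 - gamma)
        s += n_i * corner_sizes[i]
    return s

# Point at the cell centre: the result is the average of the 8 corner sizes
print(trilinear_size([1, 1, 1, 1, 2, 2, 2, 2], 0.5, 0.5, 0.5))  # 1.5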
A background cell is considered sufficiently refined when the size Slinear interpolated linearly from
its corners agrees with the size Sdef evaluated directly from the defined size functions at the cell centre:
δ = |Slinear − Sdef| / Sdef × 100 % < Δtol     Eq. 15.41
where Δtol is a given error tolerance controllable by the user. This seems a reasonable way of
stopping the refinement process, but potentially non-linear distributions in other areas of the cell cannot
be captured, especially in the earlier stages of refinement. This will corrupt the
grid generation unless a constraint of at most one level difference between neighboring cells is applied. As shown
in Figure 15.40, if the cell contains source entities whose smallest size, Bmin, is less than the
minimum size at the 8 corner points of the cell, Cmin, then the centric size Slinear computed from the 8
corners is not accurate, so another refinement of the cell is triggered. In any case, if the maximum
extent of the bounding box of a background grid cell becomes smaller than the minimum local size in
the cell (at the eight corner nodes or inside the cell), refinement of the cell stops to avoid over-refinement.
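A sketch of this refinement decision, combining the Eq. 15.41 tolerance test with the two additional checks of Figure 15.40; the argument names are illustrative only:

def should_refine_cell(s_linear, s_def, tol_percent,
                       b_min, c_min, cell_range, local_min_size):
    """s_linear : size at the cell centre interpolated from the 8 corners
    s_def      : size at the cell centre evaluated from the size functions
    b_min      : smallest size of any source entity inside the cell (or None)
    c_min      : minimum of the 8 corner sizes
    cell_range : largest extent of the cell bounding box
    local_min_size : minimum local size in the cell (corners or interior)
    """
    # Stop refining once the cell is already smaller than the local size
    if cell_range < local_min_size:
        return False
    # Refine if the linear interpolation misses the defined size (Eq. 15.41)
    delta = abs(s_linear - s_def) / s_def * 100.0
    if delta >= tol_percent:
        return True
    # Refine if a source inside the cell asks for a smaller size than
    # the corners can represent (case C in Figure 15.40)
    if b_min is not None and b_min < c_min:
        return True
    return False

print(should_refine_cell(1.0, 1.3, 10.0, None, 0.9, 2.0, 0.5))  # True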
Figure 15.40 Refining Criterion for a Background Cell (A - Actual Size Distribution from Defined Size
Functions, B – Size by Linear Interpolation from 8 corner points, C – Source Entities Possibly with
Smaller Size Inside the Cell)
15.2.3.5 Examples
A few examples are given below to show the application of a single type size function or a
combination of them in the meshing process. High quality meshes have been generated using the
defined size functions with very little effort.
Figure 15.41 Meshing the Nasty Clown Using a Single Curvature Size Function – (c) Hat-Tail
3, growth-rate = 1.2, size-limit = 2, three faces (one side face A and two interior dangling faces B and
C) are used as source entities. The size function is attached to the whole volume.
15.2.3.6 Conclusion
A general method of controlling mesh sizes and radiation for all element types and for different
kinds of geometric features has been created. The defined size functions provide rapid evaluators
that are general for any meshing algorithm. Local geometric effects are radiated to influence size
over a more non-local area. The basic algorithms for constructing the background grid and creating
fixed, curvature and proximity size functions have been put forward, and the criterion for refining
the background grid has been shown. The local mesh size at any point in the domain can be
interpolated from the pre-determined sizes at the corner points of the background cell into which the
given point falls. The proposed sizing method has been implemented in the Gambit product and
successfully tested on a wide variety of models with excellent results.
Figure 15.43 Use of Proximity and Curvature Size Functions in Meshing a Volume with Airfoil Voids
Figure 15.44 Meshing Results Using Composite Size Functions where Three kinds of Size Functions
are Attached to the Volume
In the future, other types of size functions can be added to meet specific user needs. For example, we
can add a size function that captures the exterior proximity of the volume if this is desirable. Also, we
can add a size function that uses pre-meshed entities as sources and uses the size of the existing mesh
on the sources to radiate. For some applications it is also beneficial to have directional size
functions with anisotropic properties. Speed improvement for background grid refinement is also a
focus of future work.
16 Mesh Quality
16.1 Background
Make no mistake about it, mesh quality can have a large influence upon the accuracy (and efficiency)
of simulations based on the solution of partial differential equations (PDEs). Most would argue that a
CFD solution is only as good as the mesh it is computed on. Many factors determine the influence of the
mesh on accuracy, including the type of physics being simulated, details of the solution to the particular
simulation, the method of discretization, and geometric mesh properties having to do with spacing,
curvature, angles, smoothness, etc.622. The general consensus is that a good quadrilateral mesh is
formed by two families of orthogonal, or at least nearly orthogonal, curves with a smooth gradation
between a coarse mesh in the far field and a fine mesh near the boundary. The following provisional
definition is accepted: Mesh Quality concerns the characteristics of a mesh that permit a
particular numerical PDE simulation to be efficiently performed, with fidelity to the underlying
physics, and with the accuracy required for the problem.
This description hints at several issues. First, mesh quality depends on the particular calculation
which is undertaken and thus changes if a different calculation is performed. Second, a mesh should
do no harm, i.e., it should not create difficulties for the simulation. As mesh generation methods
evolved to handle complex three-dimensional configurations, and the choice of element type
broadened to include not just hexahedra but also tetrahedra and prisms, visual inspection of a mesh
became much more difficult. The task was aided considerably by the advent of computer
workstations with powerful graphics capability and the development of good graphics software to view
CFD solutions. Today, of course, it is often possible to undertake a CFD simulation and view the
results on a laptop computer. Despite these developments in computer graphics and visualization
software it is almost impossible to check a mesh with several million points around a complete
aircraft and decide whether the quality and distribution of the mesh elements is acceptable. Even if
this were a feasible option, visual inspection of large meshes is extremely time consuming and is
clearly unacceptable in a design environment where a rapid turnaround is essential and numerous
design variations must be evaluated in a timely manner.
622 Patrick M. Knupp, “Remarks on Mesh Quality”, 45th AIAA Aerospace Sciences Meeting and Exhibit, 7-10
January, 2007, Reno, NV.
The quality of the grid typically influences:
• Rate of convergence
• Solution accuracy
• Grid Independence result
• CPU time required
Nowadays, most grid generation routines include sophisticated grid quality tools that show
the results graphically. Important metrics such as Volume, Orthogonality, Skewness,
Stretching, Centroids, etc., are available in most grid generation software. Figure 16.1 shows the
mesh quality (Volume, Aspect Ratio, and Stretching) for the benchmark test case of Turek/Hron.
Figure 16.1 Predicted Mesh Quality: (a) Volume, (b) Aspect Ratio, (c) Stretching
Grid validity checks can be grouped into three types:
1. Type 1 checks whether cells have positive volumes and faces that do not intersect each other.
2. Type 2 checks whether interior cell faces match uniquely with one other interior face and
whether boundary cell faces lie on the geometry model of the object being meshed.
3. Type 3 checks whether each surface of the geometry model is completely covered by
boundary cell faces, whether each hard edge of the geometry is covered by edges of boundary
cell faces, and whether the sum of the boundary faces areas matches the actual geometry
surface area.
[Christopher Roy]627 from Virginia Tech showed a counter-intuitive example (at least from the
standpoint of a priori metrics) that the solution of the 2D Burgers’ equation on an adapted mesh (with
cells of widely varying skew, aspect ratio, and other metrics) has much less Discretization Error (DE)
than the solution on a mesh of perfect squares as seen in Figure 16.2628. From this example alone, it
is clear that metrics based solely on cell geometry are not good indicators of mesh quality as it
pertains to solution accuracy.
Figure 16.2 A Simple Demonstration of How a Poor Mesh from a Cell Geometry Perspective Can Still Yield Lower Discretization Error
16.2.3.1 CFD++
Metacomp Technologies’ Vinit Gupta629 cited cell skewness and cell size variation as two quality
issues to be aware of for structured grids. In particular, grid refinement across block boundaries in
the far field where gradients are low has a strong, negative impact on convergence. For unstructured
and hybrid meshes, anisotropic tets in the boundary layer and the transition from prisms to tets
outside the boundary layer also can be problematic. Gupta also pointed out two problems associated
with metric computations. Cell volume computations that rely on a decomposition of a cell into tets
are not unique and depend on the manner of decomposition. Therefore, volume (or any measure that
relies on volume) reported by one program may differ from that reported by another. Similarly, face
normal computations for anything but a triangle are not unique and also may differ from program to
program. (This is a scenario can be often encountered when there is a disagreement with a solver
vendor over a cell’s volume that turns out to be the result of different computation methods.)
16.2.3.3 Kestrel
Kestrel, the CFD solver from the CREATE-AV program, was represented by David McDaniel631 from
the University of Alabama at Birmingham. At the start, he made two important statements. First, their
goal is to “do well with the mesh given to us.” (This is similar to Pointwise’s approach to dealing with
CAD geometry – do the absolute best with the geometry provided.) Second, he notes that mixed-
element unstructured meshes (their primary type) are terrible according to traditional mesh metrics,
despite being known to yield accurate results. This same observation is true for adaptive meshes and
meshes distorted by the relative motion of bodies within a mesh (e.g. flaps deflecting, stores
dropping). More significantly, McDaniel notes a “scary” interdependence between solver
discretization and mesh geometry by recalling Mavriplis’ paper on the drag prediction workshop632
in which two extremely similar meshes yielded vastly different results with multiple solvers. To
address mesh quality, Kestrel’s developers have implemented non-dimensional quality metrics that
are both local and global and that are consistent in the sense that 0 always means bad and 1 always
means good. The metrics important to Kestrel are an area-weighted measure of quad face planarity,
an interesting measure of flow alignment with the nearest solid boundary, a least squares gradient
that accounts for the orientation and proximity of neighbor cell centroids, smoothness, spacing and
isotropy. Differing from Dannenhoffer’s result, McDaniel showed a correlation of mesh quality with
solution accuracy with the caution that a well resolved mesh can have poor quality and still produce
a good answer. (In other words, more points is always better; see Figure 16.3).
Figure 16.3 Using Kestrel one can Show a Correlation Between Mesh and Solution Quality
16.2.3.4 STAR-CCM+
Alan Mueller’s633 presentation on CD-adapco’s STAR-CCM+ solver began by pointing out that mesh
quality begins with CAD geometry quality and manifests as either a low quality surface mesh or an
inaccurate representation of the true shape. This echoes Dannenhoffer’s grid validity idea. After
introducing a list of their quality metrics, Mueller makes the following statement, “Results on less
than perfect meshes are essentially the same (drag and lift) as on meshes where considerable
resources were spent to eliminate the poor cells in the mesh.” Here we note that the objective
632 Mavriplis, Dimitri J., “Grid Quality and Resolution Issues from the Drag Prediction Workshop Series”, AIAA
paper 2008-930, Jan. 2008.
633 Alan Mueller, “A CD-Adapco Perspective on Mesh Quality”,CD-Adapco.
functions are integrated quantities (drag and lift) instead of distributed data like pressure profiles.
After all, integrated quantities are the type of engineering data we want to get from CFD. This
insensitivity of accuracy to mesh quality supports Mueller’s position that poor cell quality is a
stability issue. Accordingly, the approach with STAR-CCM+ is to be conservative and opt for robustness
over accuracy. Specifically, they look for metrics that flag cells which would result in division by zero in the
solver. Skewness as it affects diffusion flux and linearization is one such example.
1. CFD solver developers believe mesh quality affects convergence much more than accuracy.
Therefore, the solution error due to poor or incomplete convergence cannot be ignored.
2. One researcher was able to show a complete lack of correlation between mesh quality and
solution accuracy. It would be valuable to reproduce this result for other solvers and flow
conditions.
3. Use as many grid points as possible (Dannenhoffer, McDaniel). In many cases, resolution
trumps quality. However, the practical matter of minimizing compute time by using the
minimum number of points (what Thornburg called an optimum mesh) means that quality
still will be important.
4. A priori metrics are valuable to users as an effective confidence check prior to running the
solver. It is important that these metrics account for cell geometry but also the solver’s
numerical algorithm. The implication is that metrics are solver-dependent. A further
implication is that Dannenhoffer’s grid validity checks should be implemented.
5. There are numerous quality metrics that can be computed, but they are often computed
inconsistently from program to program. Development of a common vocabulary for metrics
would aid portability.
6. Interpreting metrics can be difficult because their actual numerical values are non-intuitive
and stymie development of domain expertise. A metric vocabulary should account for desired
range of result numerical values and the meaning of “bad” and “good.”
The aspect ratio ARi of a cell is the ratio of its largest edge length li to its smallest height hi:
ARi = Max(li) / Min(hi)     Eq. 16.1
Aspect ratio is defined similarly for hex and polyhedral cells.
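As an illustration for triangles, the sketch below evaluates Eq. 16.1 taking li as the longest edge and hi as the smallest altitude (one common interpretation; solvers may define the lengths differently):

import math

def tri_aspect_ratio(p1, p2, p3):
    """Sketch of Eq. 16.1 for a triangle: longest edge over smallest height."""
    edges = [math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1)]
    s = sum(edges) / 2.0                       # Heron's formula for the area
    area = math.sqrt(max(s * (s - edges[0]) * (s - edges[1]) * (s - edges[2]), 0.0))
    l_max = max(edges)
    h_min = 2.0 * area / l_max                 # smallest height belongs to the longest edge
    return l_max / h_min

# Equilateral triangle: AR = 2 / sqrt(3) ≈ 1.155
print(round(tri_aspect_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)), 3))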
16.2.4.2 Orthogonality
The concept of mesh orthogonality relates to how close the angles between adjacent element faces
or adjacent element edges are to some optimal angle (for example, 90º for quadrilateral-faced
elements and 60º for triangular-faced elements). The most relevant measure of mesh orthogonality,
as defined by the CFX-Solver, is illustrated in Figure 16.4. It involves the angle between the vector
that joins two mesh (or control volume) nodes (s) and the normal vector for each integration point
surface (n) associated with that edge. Significant orthogonality and non-orthogonality are illustrated
at ip1 and ip2, respectively.
Figure 16.4 Concept of Orthogonality in Cells
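A sketch of this orthogonality measure, returning the angle in degrees between the node-to-node vector s and an integration-point surface normal n (illustrative data; the CFX-Solver's exact bookkeeping is not reproduced):

import math

def orthogonality_angle(node_a, node_b, face_normal):
    """Angle between the vector s joining two control-volume nodes and the
    integration-point surface normal n; 0 degrees means perfectly orthogonal."""
    s = [node_b[i] - node_a[i] for i in range(3)]
    dot = sum(s[i] * face_normal[i] for i in range(3))
    mag = math.sqrt(sum(c * c for c in s)) * math.sqrt(sum(c * c for c in face_normal))
    cos_t = max(-1.0, min(1.0, dot / mag))
    return math.degrees(math.acos(abs(cos_t)))

# Face normal aligned with the node-to-node vector: fully orthogonal
print(orthogonality_angle((0, 0, 0), (1, 0, 0), (1, 0, 0)))            # 0.0
print(round(orthogonality_angle((0, 0, 0), (1, 1, 0), (1, 0, 0)), 1))  # 45.0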
16.2.4.3 Skewness
The skew of triangular elements is calculated by finding the minimum angle between the vector from
each node to the opposing mid-side and the vector between the two adjacent mid-sides at each node of
the element, as shown in Figure 16.5 (a). The minimum angle found is subtracted from ninety degrees
and reported as the element‘s skew. Skew in quads is calculated by finding the minimum angle between
the two lines joining opposite mid-sides of the element. Ninety degrees minus the minimum angle found
is reported as the skew of the element. Skew values of up to about 60–70 are accepted by most solvers;
beyond this limit the solver can complain about the skewness of the grid.
Figure 16.5 Skewness and Warpage: (a) Skewness in Triangle, (b) Warpage Calculation of a Quadrilateral Element
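A sketch of the triangle skew calculation described above, using mid-side vectors and reporting ninety degrees minus the minimum angle (coordinates are illustrative):

import math

def tri_skew(p1, p2, p3):
    """90 degrees minus the minimum angle between each node-to-opposing-midside
    vector and the vector joining the two adjacent mid-sides."""
    def mid(a, b):
        return [(a[i] + b[i]) / 2.0 for i in range(2)]
    def angle(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        mag = math.hypot(*u) * math.hypot(*v)
        return math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / mag))))
    pts = [p1, p2, p3]
    min_angle = 90.0
    for i in range(3):
        node, a, b = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        to_opposing_mid = [mid(a, b)[k] - node[k] for k in range(2)]
        between_adjacent_mids = [mid(node, b)[k] - mid(node, a)[k] for k in range(2)]
        min_angle = min(min_angle, angle(to_opposing_mid, between_adjacent_mids))
    return 90.0 - min_angle

# An equilateral triangle has zero skew by this measure
print(round(tri_skew((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)), 2))  # 0.0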
16.2.4.4 Warpage
This is the amount by which an element (or, in the case of solid elements, an element face) deviates
from being planar. Since three points define a plane, this check only applies to quads. The quad is
divided into two trias along its diagonal, and the angle between the trias' normals is measured, as
shown in Figure 16.5 (b). The maximum angle found between the planes is the warpage of the
element. Warpage of three-dimensional elements is evaluated in the same fashion on all faces of the
element. Warpage of up to five degrees is generally acceptable.
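A sketch of the quad warpage check described above, splitting the quad along each diagonal and reporting the largest angle between the resulting triangle normals (node ordering is assumed counter-clockwise; the data are illustrative):

import math

def quad_warpage(p1, p2, p3, p4):
    """Largest angle between the normals of the two triangles obtained by
    splitting the quad along either diagonal."""
    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]
    def cross(u, v):
        return [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]
    def angle_between(n1, n2):
        dot = sum(n1[i] * n2[i] for i in range(3))
        mag = math.sqrt(sum(c * c for c in n1)) * math.sqrt(sum(c * c for c in n2))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
    # Diagonal p1-p3
    a1 = angle_between(cross(sub(p2, p1), sub(p3, p1)),
                       cross(sub(p3, p1), sub(p4, p1)))
    # Diagonal p2-p4
    a2 = angle_between(cross(sub(p3, p2), sub(p4, p2)),
                       cross(sub(p4, p2), sub(p1, p2)))
    return max(a1, a2)

# Lift one corner of a unit quad out of plane by 0.05
print(round(quad_warpage((0, 0, 0), (1, 0, 0), (1, 1, 0.05), (0, 1, 0)), 2))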
16.2.4.5 Jacobian
This measures the deviation of an element from its ideal or "perfect" shape, such as a triangle's
deviation from equilateral. The Jacobian value ranges from -1.0 to 1.0, where 1.0 represents
a perfectly shaped element. As the element becomes more distorted, the Jacobian value
approaches zero. A Jacobian value of less than zero represents a concave element, which
most analysis codes do not allow. It is therefore good practice to keep the Jacobian of the grid
greater than zero634.
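A sketch of one common scaled-Jacobian style evaluation for a planar quad; the exact definition varies between codes, so this is an assumed variant in which the minimum normalized corner cross product is reported:

import math

def quad_scaled_jacobian(p1, p2, p3, p4):
    """Minimum, over the four corners, of the normalized cross product of the
    two adjacent edges (1 = perfect square, values below 0 indicate concavity)."""
    pts = [p1, p2, p3, p4]
    worst = 1.0
    for i in range(4):
        o = pts[i]
        a = pts[(i + 1) % 4]           # next corner (counter-clockwise)
        b = pts[(i - 1) % 4]           # previous corner
        e1 = (a[0] - o[0], a[1] - o[1])
        e2 = (b[0] - o[0], b[1] - o[1])
        cross = e1[0] * e2[1] - e1[1] * e2[0]
        worst = min(worst, cross / (math.hypot(*e1) * math.hypot(*e2)))
    return worst

print(quad_scaled_jacobian((0, 0), (1, 0), (1, 1), (0, 1)))                 # 1.0 (square)
print(round(quad_scaled_jacobian((0, 0), (1, 0), (0.2, 0.2), (0, 1)), 3))   # negative (concave)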
The volume of a tetrahedron with vertices A, B, C, D (coordinates (x1, y1, z1) … (x4, y4, z4)) follows
from the scalar triple product:
VP = AD · (AB × AC)
   = (x4 − x1)[(y2 − y1)(z3 − z1) − (z2 − z1)(y3 − y1)]
   + (y4 − y1)[(z2 − z1)(x3 − x1) − (x2 − x1)(z3 − z1)]
   + (z4 − z1)[(x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1)]     Eq. 16.2
Vtet = VP / 6     Eq. 16.3
Each polygonal face is triangulated about a provisional face centre rf, where Nf denotes the number of
face nodes and r is the position vector. The areas of the triangular patches are added to obtain the area
of the polygon face:
Atri = (ri − rf) × (ri+1 − rf) / 2   for i = 1, …, Nf ,     Af = Σ (i = 1 … Nf) Atri     Eq. 16.4
with rNf+1 = r1. The centroid of the face is computed in a similar fashion as:
rf = (1 / Af) Σ (i = 1 … Nf) Atri (ri + ri+1 + rf) / 3     Eq. 16.5
Note that the face centroid rf is initially taken simply as the midpoint of the nodes, but it is updated
at the end of the process. In the case of a planar polygon, this updated location is the true
centroid of the polygon. However, while not desirable, polygon nodes may be highly non-coplanar in
practice. This introduces ambiguity into the centroid location, as no unique definition exists based
solely on knowledge of the node coordinates. In this case, simply iterating over Eq. 16.4 & Eq.
16.5 until convergence provides a reasonable answer. The triangulated polygonal face, even if non-
coplanar, is still attached to each of the vertices defining it, as opposed to an approach where one
might fit a planar surface to the vertices. This ensures that, once all the faces of a cell are processed, a
water-tight control volume is achieved. We note once again that regardless of the aforementioned
635Emre Sozer, Christoph Brehm and Cetin C. Kiris,”Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
ambiguity for non-coplanar polygons, consistency can be retained if the cells sharing a face use the
same face centroid and area for their reconstruction and flux integration636.
The cell centre rm is approximated as the average of the Nc cell node positions, and each triangular
face patch, together with rm, defines a tetrahedron:
rm = (1 / Nc) Σ (i = 1 … Nc) ri ,     Vtet = (1/3) Atri · (rm − rf)     Eq. 16.6
where Atri and rf for a given face f are obtained as described previously. This usage of face triangulation
around the previously calculated centroid ensures that a consistent volume is obtained. The integrated
volume and the centroid of the polyhedral cell are then calculated via summation of the contributions from
all the face triangulations:
V = Σ (f = 1 … Nf) Σ (i = 1 … Ni) Vtet ,     rc = (1 / 4V) Σ (f = 1 … Nf) Σ (i = 1 … Ni) (rf,i + rf,i+1 + rf + rm) Vtet     Eq. 16.7
where Nf is the number of faces and Ni is the number of face nodes637.
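A sketch of Eqs. 16.4–16.7 for a polyhedral cell, assuming each face is supplied as an outward-ordered loop of nodes; the sign convention and the fixed iteration count are assumptions, and numpy is used for brevity:

import numpy as np

def face_area_and_centroid(nodes, n_iter=3):
    """Triangulate the polygonal face about a provisional centre and iterate
    the centroid (Eqs. 16.4-16.5); iteration helps non-planar faces."""
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    r_f = nodes.mean(axis=0)                     # start from the node midpoint
    for _ in range(n_iter):
        tri = [0.5 * np.linalg.norm(np.cross(nodes[i] - r_f,
                                             nodes[(i + 1) % n] - r_f))
               for i in range(n)]
        a_f = sum(tri)                           # Eq. 16.4: sum of patch areas
        r_f = sum(tri[i] * (nodes[i] + nodes[(i + 1) % n] + r_f) / 3.0
                  for i in range(n)) / a_f       # Eq. 16.5: updated centroid
    return a_f, r_f

def cell_volume_and_centroid(faces):
    """Sum tetrahedra built from each face triangle and an approximate cell
    centre r_m (Eqs. 16.6-16.7, up to the sign convention assumed here)."""
    pts = np.unique(np.vstack([np.asarray(f, dtype=float) for f in faces]), axis=0)
    r_m = pts.mean(axis=0)                       # approximate cell centre
    vol, moment = 0.0, np.zeros(3)
    for f in faces:
        f = np.asarray(f, dtype=float)
        _, r_f = face_area_and_centroid(f)
        for i in range(len(f)):
            r_i, r_j = f[i], f[(i + 1) % len(f)]
            a_tri = 0.5 * np.cross(r_i - r_f, r_j - r_f)     # outward area vector
            v_tet = np.dot(a_tri, r_f - r_m) / 3.0           # positive for outward-ordered faces
            vol += v_tet
            moment += (r_i + r_j + r_f + r_m) / 4.0 * v_tet  # Eq. 16.7 numerator
    return vol, moment / vol

# Unit cube described by six outward-ordered quadrilateral faces
cube = [[(0,0,0),(0,1,0),(1,1,0),(1,0,0)], [(0,0,1),(1,0,1),(1,1,1),(0,1,1)],
        [(0,0,0),(1,0,0),(1,0,1),(0,0,1)], [(0,1,0),(0,1,1),(1,1,1),(1,1,0)],
        [(0,0,0),(0,0,1),(0,1,1),(0,1,0)], [(1,0,0),(1,1,0),(1,1,1),(1,0,1)]]
print(cell_volume_and_centroid(cube))            # ~ (1.0, array([0.5, 0.5, 0.5]))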
636 Emre Sozer, Christoph Brehm and Cetin C. Kiris,”Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
637 Emre Sozer, Christoph Brehm and Cetin C. Kiris,”Gradient Calculation Methods on Arbitrary Polyhedral
Unstructured Meshes for Cell-Centered CFD Solvers”, American Institute of Aeronautics and Astronautics.
638 Stefano Paoletti, “ Polyhedral Mesh Optimization Using The Interpolation Tensor”, Adapco, U.S.A., 2002.
Figure 16.11 Multi-Connected Non-Convex region with a Clearly Invalid Initial 2D Planar Mesh (left),
Smoothed Planar Mesh (right) – Courtesy of Paoletti
Figure 16.9 Boundary (top) and Interior of Polyhedral Mesh
Figure 16.10 Initial Surface Mesh (top) and Smoothed Surface Mesh (bottom)
16.2.4.9.3 Conclusions
Definitions of both cell and mesh quality have been presented. A method for improving the quality of
polygonal and polyhedral meshes based on the optimization of such quality has been described and
tested in numerical experiments. This approach is similar to the one described in639-640 but tries to
address the limitation to trivalent polyhedra. More work is needed to describe the variety of
objective functions that can be built by combining the invariants of the interpolation tensor. It would
also be useful, at least from a theoretical viewpoint, to carry the development of the interpolation
tensor to higher order terms in the Taylor expansion. It may be worth noting that the smoothing
technique presented here can also be used to optimize the position of the nodes for a meshless
method.
639 P.Knupp, ”Achieving Finite Element Mesh Quality via Optimization of the Jacobian Matrix Norm and Associated
Quantities, Part I – A Framework for Surface Mesh Optimization & The Condition Number of the Jacobian Matrix”,
SAND 99-0709J.
640 P.Knupp, ”Achieving Finite Element Mesh Quality via Optimization of the Jacobian Matrix Norm and Associated
Quantities, Part II – A Framework for Volume Mesh Optimization & The Condition Number of the Jacobian Matrix”,
SAND 99-0709J.
641 Abhishek Khare, Ashish Singh, and Kishor Nokam, “Best Practices in Grid Generation for CFD Applications
Using HyperMesh”, Member of Technical Staff, Computational Research Lab.
Imported CAD data typically contains flaws in the form of gaps, overlaps and non-physical protrusions,
so a lot of cleanup of the imported CAD geometry is needed. Geometry cleanup is a time-consuming step
and it requires some intelligence to decide which features of the geometry have to be removed and which
to retain. The usual practice is to retain the details that matter for the simulation and to ensure a
water-tight geometry.
1. Complex geometry: Unstructured grid generation is usually much faster than structured
grid generation. However, if the geometry is only slightly modified from a previously existing geometry
with a structured grid, then structured grid generation can proceed very rapidly. For a
particular problem, a structured grid can take, say, a few man-weeks to one man-month, whereas an
unstructured grid will take a few man-hours to a few days.
2. Accuracy: For simpler problems such as an airfoil (single element) or an isolated wing,
structured grids are generally more accurate per unknown than unstructured grids. However, for
more complex flows, the adaptivity facilitated by an unstructured grid may allow more
accurate solutions.
3. Convergence: Structured grid calculations usually take less time than unstructured grid
calculations because, to date, the existing algorithms are more efficient.
642Abhishek Khare, Ashish Singh, and Kishor Nokam, “Best Practices in Grid Generation for CFD Applications
Using HyperMesh”, Member of Technical Staff Computational Research Lab.
643 Marco Lanfrit, “Best Practices Guidelines for Handling Automotive External Aerodynamics with Fluent“,
Fluent Deutschland GmbH, Birkenweg 14a, 64295 Darmstadt/Germany, 2005.
wall mesh spacing. However, for best results, it might be necessary to use very fine near-wall mesh
spacing (on the order of y+)644.
16.3.8.1 Case Study - Best Practice & Guidelines for Handling Automotive External Mesh Generation
with FLUENT
This document gives a description of a straightforward and reliable way to perform mesh generation
for simulations in the field of automotive external aerodynamics using the CFD software package FLUENT®.
The items and approaches listed below do not claim to be complete or optimized; they are simply
recommendations based on experience with recent comparable studies. Since we are concerned here mainly
with the mesh generation aspect of the problem, for other settings (solver, boundary conditions, turbulence,
etc.) readers are encouraged to consult the work by [Lanfrit646].
• Strategy A (Adaption)
• Strategy B (Boxes)
• Strategy C (Controls)
In this section, the advantages and disadvantages of these three approaches are discussed.
1 Since the FLUENT solver in Version 6.1 has no access to the grid’s original geometry database,
mesh adaption is not useful for improving the geometry resolution of the surface mesh.
2 By using the hanging nodes adaption functionality, numerical instability and possibly
numerical diffusion are introduced by large size gradients between neighboring cells.
3 Adaption needs several manual interventions by the user.
646 Marco Lanfrit, “Best practice guidelines for handling Automotive External Aerodynamics with FLUENT”, Fluent Deutschland GmbH, Birkenweg 14a, 64295 Darmstadt/Germany, 2005.
the Volume Meshing section of this document. This approach is very accurate and avoids the creation
of additional surfaces in prior steps. This strategy is recommended by Fluent.
Figure 16.13 shows an example of a side view-mirror with two different mesh resolutions. The
surface meshing should result in a high-quality, non-uniform triangular surface mesh that resolves
all radii and geometrical details well. In summary:
• Radii and sharp edges should be resolved very accurately, while planar faces can be meshed
relatively “coarse”
• The maximum growth rate of surface elements should be 20%, even from a radius to a planar face
and vice versa
• The skewness of the surface mesh should be as good as possible, ideally < 0.45
The recommendation for prism layer growth on the vehicle surfaces is: First Aspect Ratio: 5,
Geometric growth rate: 1.2, Number of Layers: 5. Growing prisms on the wind tunnel floor is done
using a uniform growth method. Therefore a First Height, Growth Rate and Number of Layers have to
be specified. The First Height is determined by the average surface element size of the floor under
the car. To ensure proper boundary layer resolution in this highly affected region, it is important
to have a numerically fitted mesh in this particular region, just as on the car surface. Therefore the first
height is calculated using the aspect ratio approach given above; the first height should be a fifth of
the average surface element size.
Figure 16.15 Handling Prism Sides using Non-conformal Interfaces – Courtesy of Lanfrit
value is given in terms of volume units. A recommendation for this value is based on the surface
elements size on the bounding box (l-bound), which should be around 100 - 200 mm.
• Make sure that a particle coming from the inlet area and following a path to the front
stagnation point of the car would have to pass through a minimum of 100 cells. If this region is
under-resolved, the pressure coefficient will go far beyond 1 and spoil the overall solution.
• Check that the largest cells within the flow domain are smaller than those attached to the
bounding box (for pure tetrahedral meshes).
• Check that the quality of cells is below 0.9 for the whole domain. It is recommended to have a
quality below 0.85 for prism elements on the car surface. Use T-Grid’s Mesh Repair
functionalities to fix local quality problems.
647 Blazek, J, “Computational Fluid Dynamics: Principles and Applications”, Elsevier Science Publication, 2005.
648 Sven Perzon, “On blockage effects in wind tunnel - A CFD Study”, SAE - 2001-01-0705, 2001.
17 Appendix A
A.1 Computer Code for a Transfinite Interpolation
This subroutine is based on a transfinite interpolation with a Lagrangian blending function. The
following section describes the subroutine arguments.
Nomenclature
II(i), JJ(j) : These arrays store the locations of known grid lines in the i- and j-directions,
respectively (1 for known grid lines).
Example: Consider surface IV in Figure 17.1. In this case, five grid lines are known: two lines at GH,
two lines at GJLN, and one line at NO. The size of the grid is 95 in the I-direction and 50 in the J-
direction. Point G is at (70, 15) and point O is at (95, 25).
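As an illustration of the technique (not a reproduction of the subroutine itself, and without the II/JJ bookkeeping of known interior grid lines), the sketch below performs basic two-dimensional transfinite interpolation from four prescribed boundary curves with linear Lagrangian blending functions:

import numpy as np

def tfi_2d(bottom, top, left, right):
    """2D transfinite interpolation with linear Lagrangian blending.

    bottom, top : arrays of shape (ni, 2), boundary points along j = 0 and j = nj-1
    left, right : arrays of shape (nj, 2), boundary points along i = 0 and i = ni-1
    Returns x, y arrays of shape (ni, nj); the corner points of the four
    boundary curves are assumed to coincide.
    """
    ni, nj = len(bottom), len(left)
    grid = np.zeros((ni, nj, 2))
    for i in range(ni):
        xi = i / (ni - 1)
        for j in range(nj):
            eta = j / (nj - 1)
            u = (1 - eta) * bottom[i] + eta * top[i]        # blend in eta
            v = (1 - xi) * left[j] + xi * right[j]          # blend in xi
            uv = ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
                  + (1 - xi) * eta * top[0] + xi * eta * top[-1])
            grid[i, j] = u + v - uv                         # Boolean sum
    return grid[:, :, 0], grid[:, :, 1]

# Example: a 5 x 4 grid on the unit square
ni, nj = 5, 4
s, t = np.linspace(0, 1, ni), np.linspace(0, 1, nj)
bottom = np.column_stack([s, np.zeros(ni)])
top    = np.column_stack([s, np.ones(ni)])
left   = np.column_stack([np.zeros(nj), t])
right  = np.column_stack([np.ones(nj), t])
x, y = tfi_2d(bottom, top, left, right)
print(x.shape, y.shape)   # (5, 4) (5, 4)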