
Advanced Computational Fluid Dynamics

Jacek Rokicki

Copyright © 2014/2016, Jacek Rokicki



Table of contents
1. Introduction
2. Navier-Stokes equations
3. Euler equations
4. Euler equations in 1D
5. Model problems
6. Scalar products and norms
   Scalar product
   Vector norms
7. Algebraic eigenproblem
   Main properties of the eigenvalues and eigenvectors
   Similar matrices
   Jordan matrix (Jordan normal form of a matrix)
   Power method to calculate eigenvalues
   Solving the linear equation having the eigenvalues and eigenvectors of the matrix
8. Eigenvectors (eigenfunctions) and eigenvalues of selected matrices (operators)
   Eigenfunctions and eigenvalues of the 1D BVP operator
   Eigenvalues and eigenvectors of the discrete 1D BVP operator
   Eigenfunctions and eigenvalues of the 2D Poisson operator
   Eigenfunctions and eigenvalues of the discrete 2D Poisson operator
9. Vector and matrix norms revisited
   Vector norms
   Matrix norms
10. Iterative methods to solve large linear systems
    The Jacobi iterative method
    The Gauss-Seidel iterative method
    Error analysis of consecutive iterations of the Jacobi iterative algorithm
    Error analysis of Jacobi iterations with underrelaxation
11. Multigrid method
12. Matrix functions
    Matrix linear ODE
13. Nonlinear equations
    The Method of Frozen Coefficients
    Newton Method (Quasi-Linearisation)
14. Model scalar equations in 1D
    Advection equation (model hyperbolic equation)
    Diffusion equation (model parabolic equation)
    Advection-Diffusion equation
    Telegraph equation
15. Multidimensional first order PDE
16. Discretisation of the scalar advection equation
    Finite difference formulas
    Explicit Euler Formula
    Explicit one sided formula
    The explicit Lax-Friedrichs formula
    Implicit formulas
    Lax-Wendroff formula
    Beam-Warming formula
17. Discretisation of the multidimensional hyperbolic equation
18. Nonlinear hyperbolic equations
    The method of characteristics
    Linear equation with constant coefficient a(u, x) = c
    Linear equation with variable coefficient a(u, x) = x
    Nonlinear equation a(u, x) = u
    Weak solutions of the nonlinear hyperbolic equations
    Propagation of discontinuities (scalar case)
    Which discontinuities are permanent?
    Conservative vs. quasilinear formulation
    Propagation of discontinuities (vector case)
19. Godunov's order barrier theorem
20. Annex 1
21. Annex 2
22. Annex 3


1. Introduction

The content of this book covers lectures in Advanced Computational Fluid Dynamics held at the
Faculty of Power and Aeronautical Engineering since 2006. The lectures have evolved over the last eight years, broadening in scope and incorporating newer topics.
The original content was very much inspired by the works of Randall J. LeVeque, and in particular by his book Numerical Methods for Conservation Laws [1].
The present book starts with the general equations of Fluid Mechanics and then analyses various model problems, which help to understand the complexity of the multidimensional, nonlinear Navier-Stokes and Euler equations and of their discretisations. Various topics in numerical analysis and algebra (notably the algebraic eigenproblem) are also introduced to make the exposition self-contained. Certain topics are recalled from the more elementary course in Computational Fluid Dynamics held for undergraduate students.

Warsaw, 2014-2016

[1] Randall J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, 1990, ISBN 978-3-7643-2464-3.


2. Navier-Stokes equations
The Navier-Stokes equations for a compressible medium are best presented in a unified manner which underlines their conservative structure:

∂U/∂t + Div F(U) = Div G(U, ∇U)    (1)

where:

U = [ρ, ρv, ρE]ᵀ ∈ ℝ⁵,   F(U) = [ ρvᵀ ;  ρv⊗v + pI₃ₓ₃ ;  ρHvᵀ ] ∈ ℝ⁵ˣ³    (2)

In the above, U stands for the composite conservative variable, ρ is the density, v = (v₁, v₂, v₃)ᵀ denotes the velocity vector, E = e + |v|²/2 is the total energy per unit mass, p stands for pressure, while H = E + p/ρ denotes the total enthalpy per unit mass. F(U) and G(U, ∇U) stand for the convective and viscous fluxes respectively. The viscous flux can be expressed as:

G(U, ∇U) = [ 0₁ₓ₃ ;  τ ;  (τv − q)ᵀ ]    (3)

where q stands for the heat flux, while τ denotes the stress tensor. Both these quantities in Fluid Mechanics (and especially the stress tensor) can be defined via very different formulas, in particular when modelling of turbulence is attempted; nevertheless, for a simple Newtonian, linear fluid they are defined/calculated as:

τ = μ ( −(2/3)(∇·v) I + ∇v + (∇v)ᵀ )
q = −κ ∇T                                                            (4)

where μ and κ stand for the coefficients of dynamic viscosity and thermal conductivity respectively. It is good to remember that the dynamic viscosity is very small and, for air and water, equals ~10⁻⁵ kg/(m·s) and ~10⁻³ kg/(m·s) respectively. This is the reason why in the Navier-Stokes equations the convective flux plays, in a sense, a more important role than the viscous flux (at least for higher Reynolds numbers).

In addition, for a perfect gas, the equation of state is assumed in the usual form p = ρRT (where T stands for temperature, while R is the ideal gas constant).


The tensor divergence operator Div present in (1) can be expressed for clarity in an extended form
as:


Div F = [ div(ρv),  div(ρv₁v) + ∂p/∂x₁,  div(ρv₂v) + ∂p/∂x₂,  div(ρv₃v) + ∂p/∂x₃,  div(ρHv) ]ᵀ    (5)

where div denotes the usual scalar divergence operator acting on a vector function.

3. Euler equations

The Euler equations are obtained from the Navier-Stokes equations (1) by assuming that μ ≡ 0 and κ ≡ 0, i.e. the fluid is inviscid and does not conduct heat. In this case the equations are significantly simplified:

∂U/∂t + Div F(U) = 0    (6)

And in slightly different notation they assume the form:


U = [ρ, ρv, ρE]ᵀ ∈ ℝ⁵,   F(U) = [ ρvᵀ ;  ρv⊗v + pI₃ₓ₃ ;  ρHvᵀ ]    (7)

" 1
Where for perfect gas:

(= + = Q+"

P! P
" = R − 1 SQ − T , R = U) /UW
2
(8)

RQ P! P
(= − R−1
2 X

4. Euler equations in 1D
For 1D cases the Euler equations are further simplified to:

∂U/∂t + ∂F/∂x = 0    (9)

with:

U = [ρ, ρu, ρE]ᵀ ∈ ℝ³,   F(U) = [ ρu,  ρu² + p,  ρHu ]ᵀ             (10)

and:

p = (γ − 1)(ρE − ρu²/2),   γ = c_p/c_v                               (11)
H = γE − (γ − 1)u²/2
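As a quick sanity check of formulas (10)-(11), the 1D flux can be evaluated from the conservative variables in a few lines of Python. This is an illustrative sketch, not part of the lecture; the value γ = 1.4 (air) and the sample state are assumed for the example:

```python
import numpy as np

GAMMA = 1.4  # c_p/c_v for air (assumed for this example)

def euler_flux_1d(U):
    """Flux F(U) of the 1D Euler equations (10), U = (rho, rho*u, rho*E)."""
    rho, rho_u, rho_E = U
    u = rho_u / rho
    E = rho_E / rho
    p = (GAMMA - 1.0) * (rho_E - 0.5 * rho * u**2)   # pressure, eq. (11)
    H = E + p / rho                                   # total enthalpy per unit mass
    return np.array([rho_u, rho_u * u + p, rho * H * u])

# gas at rest (u = 0): only the pressure term survives in the momentum flux
U0 = np.array([1.0, 0.0, 2.5])   # rho = 1, u = 0, E = 2.5  ->  p = 0.4 * 2.5 = 1
F0 = euler_flux_1d(U0)           # -> [0, 1, 0]
```

For a fluid at rest the mass and energy fluxes vanish and the momentum flux reduces to the pressure, as the formula predicts.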

5. Model problems
In order to better understand the principles of discretisation, various model problems will be
considered here, including:

1. 1D elliptic problem
2. 2D Poisson equation
3. Advection equation
4. Advection-diffusion equation
5. 1D parabolic problem
6. Telegraph equation
7. Multidimensional hyperbolic problem
8. Nonlinear advection problem
9. Burgers equation
The analysis of these discretisations will be made possible by applying theoretical tools mainly related to the algebraic eigenvalue problem.


6. Scalar products and norms

Scalar product
Let us consider a vector space V over ℝ or ℂ; the elements of this space will be denoted by u, v, w ∈ V, while scalars are denoted as α, β, γ ∈ ℝ or ℂ.

Definition
The scalar product is defined as a two-argument function

⟨·,·⟩ : V × V → ℝ (or ℂ)    (12)

with the following axiomatic conditions:

(i)   ⟨u, u⟩ ≥ 0,  ⟨u, u⟩ = 0 ⇔ u ≡ 0
(ii)  ⟨αu, v⟩ = α⟨u, v⟩
(iii) ⟨u, v⟩ = conj⟨v, u⟩                                            (13)
(iv)  ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩

Examples:
The most popular formula for the finite-dimensional space ℂⁿ is:

⟨u, v⟩ ≝ Σₖ₌₁ⁿ uₖ·conj(vₖ) ≡ vᴴu    (14)

But other forms are also possible:

⟨u, v⟩* ≝ Σₖ₌₁ⁿ βₖ²·uₖ·conj(vₖ)    (15)

⟨u, v⟩** ≝ vᴴAu ≡ Σᵢ₌₁ⁿ Σₖ₌₁ⁿ aᵢₖ uₖ conj(vᵢ)    (16)

where A is a positive-definite (∀u ≠ 0: uᴴAu > 0) Hermitian matrix (A = Aᴴ ≡ conj(Aᵀ)).

Remark:
If the matrix is Hermitian, then A = Aᴴ ⇔ ⟨Au, v⟩ = ⟨u, Av⟩

Proof:
∀u, v:  ⟨Au, v⟩_ℂ = vᴴ(Au) = (vᴴA)u = (Aᴴv)ᴴu = ⟨u, Aᴴv⟩_ℂ = ⟨u, Av⟩_ℂ    (17)

or, in the real case:

∀u, v:  ⟨Au, v⟩_ℝ = vᵀ(Au) = (vᵀA)u = (Aᵀv)ᵀu = ⟨u, Aᵀv⟩_ℝ = ⟨u, Av⟩_ℝ    (18)
This is the rationale for defining the Hermitian/symmetric matrices and subsequently operators via
(17), (18) rather than by the definition using the matrix elements. The Hermitian operators are
further on called self-adjoint.

The examples of scalar products for operators are given below:


⟨u, v⟩_{L²(Ω)} ≝ ∫_Ω u·v dΩ
⟨u, v⟩_{H¹(Ω)} ≝ ∫_Ω ( u·v + (du/dx)·(dv/dx) ) dΩ                    (19)
The important property of each scalar product is now recalled, namely the Cauchy-Schwarz inequality:

∀u, v:  |⟨u, v⟩| ≤ √( ⟨u, u⟩·⟨v, v⟩ )    (20)
As a result we are always able to define the angle between vectors u and v for every real-valued scalar product, e.g. for u, v ∈ L²(Ω), as:

cos ∠(u, v) ≝ ⟨u, v⟩ / ( √⟨u, u⟩ · √⟨v, v⟩ )    (21)

This formula allows in turn to infer the value of the angle φ.

Example:
Suppose now that u(x) = x, v(x) = x², while Ω = ⟨0, 1⟩. We will calculate the angle φ between u and v:

cos φ = ∫₀¹ x³ dx / ( √(∫₀¹ x² dx) · √(∫₀¹ x⁴ dx) ) = (1/4) / ( √(1/3)·√(1/5) ) = √15/4  ⟹  φ ≈ 14.48°    (22)

Exercise: What is the angle between these functions in the H¹ scalar product?
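The worked example (22) can be verified in a few lines of Python (an illustrative sketch, not part of the lecture; the exact integrals are inserted by hand):

```python
from math import sqrt, acos, degrees

# exact integrals on <0,1>:  <u,v> = int x^3 dx = 1/4,
# <u,u> = int x^2 dx = 1/3,  <v,v> = int x^4 dx = 1/5
inner  = 1.0 / 4.0
norm_u = sqrt(1.0 / 3.0)
norm_v = sqrt(1.0 / 5.0)

cos_phi = inner / (norm_u * norm_v)   # = sqrt(15)/4, eq. (22)
phi_deg = degrees(acos(cos_phi))      # ~ 14.48 degrees
```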

Definition:
Vectors u and v are called orthogonal ⇔ ⟨u, v⟩ = 0

The notion of orthogonality is very important and will be used extensively in the further exposition.

The space with the scalar product is called an inner-product space or unitary space.

Vector norms
The norm of a vector is defined in the following axiomatic way, as a non-negative function

‖·‖ : V → ℝ₊ ∪ {0}

fulfilling three conditions (u, v ∈ V, α ∈ ℝ):

(i)   ‖u‖ ≥ 0,  ‖u‖ = 0 ⇔ u ≡ 0
(ii)  ‖αu‖ = |α|·‖u‖                                                 (23)
(iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖

Examples (Hölder norms):

u ∈ ℂⁿ or ℝⁿ:   ‖u‖_p ≝ ( Σₖ₌₁ⁿ |uₖ|^p )^{1/p}    (24)

In particular:


‖u‖₁ = Σₖ₌₁ⁿ |uₖ|,   ‖u‖₂ = √( Σₖ₌₁ⁿ |uₖ|² ),   ‖u‖_∞ = maxₖ |uₖ|    (25)
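The three special cases of (25) can be checked directly with NumPy (an illustrative sketch, not part of the lecture; the sample vector is assumed):

```python
import numpy as np

u = np.array([3.0, -4.0, 0.0])

n1   = np.linalg.norm(u, 1)       # |3| + |-4| + |0| = 7
n2   = np.linalg.norm(u, 2)       # sqrt(9 + 16) = 5
ninf = np.linalg.norm(u, np.inf)  # max(|3|, |-4|, |0|) = 4
```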

It is important to notice that each scalar product generates a norm:

‖u‖ ≝ √⟨u, u⟩    (26)

thus each unitary space is also a normed space. This is not true the other way round: usually a norm is not generated by any scalar product, and out of all Hölder norms only p = 2 corresponds to a scalar product. For functions (infinite-dimensional functional spaces) the norms are defined in a manner analogous to the Hölder vector norms:

• For integrable functions:                ‖u‖_{L¹(Ω)} ≝ ∫_Ω |u(x)| dx
• For square-integrable functions:         ‖u‖_{L²(Ω)} ≝ √( ∫_Ω |u(x)|² dx )
• For continuous functions:                ‖u‖_{C(Ω)} ≝ max_{x∈Ω} |u(x)|    (27)
• For functions with square-integrable
  first derivative:                        ‖u‖_{H¹(Ω)} ≝ √( ∫_Ω (|u(x)|² + |u′(x)|²) dx )

Remark:
Each norm generates a metric (distance function) via the formula:

d(u, v) ≝ ‖u − v‖    (28)
Thus every normed space is also a metric space (but again not vice versa).


7. Algebraic eigenproblem
For the purpose of further analysis we recall now the most important features of the algebraic (and
also operator) eigenproblems.

Consider real or complex valued matrices A ∈ ℝⁿˣⁿ or A ∈ ℂⁿˣⁿ. The algebraic eigenvalue problem consists in finding a nonzero v ∈ ℝⁿ or ℂⁿ such that:

A v = λ v    (29)

where λ ∈ ℂ denotes the eigenvalue corresponding to the eigenvector v.

Interpretation: We seek the "direction" v which remains unchanged after v is multiplied by A (only the length is modified).

Properties:

(A − λI)v = 0  ⇔  det(A − λI) = 0
                        ⇕                                            (30)
cₙλⁿ + cₙ₋₁λⁿ⁻¹ + ⋯ + c₁λ + c₀ = 0

The last formula forms the characteristic polynomial, which is obtained by calculating the determinant above; indeed c₀ = det A.
Therefore a singular matrix has at least one zero eigenvalue, while all eigenvalues of a non-singular matrix are non-zero. From the properties of polynomials we see that the matrix always has n eigenvalues (not necessarily distinct and not always real valued, even for real matrices).
The matrices A and Aᵀ have the same eigenvalues. The real matrix A ∈ ℝⁿˣⁿ may have complex eigenvalues, which are then always pairwise conjugated.
There exists no finite algorithm to find the eigenvalues of A, as there exists no finite algorithm to find the roots of a polynomial of sufficiently high degree (the characteristic polynomial in this case).
For numerical reasons the coefficients of the characteristic polynomial should never be directly evaluated (as they accumulate all round-off errors).

Examples
1. The case when the matrix is the identity:

A = I  ⇔  Iv = v
λ₁ = λ₂ = ⋯ = λₙ = 1                                                 (31)

The eigenvector can be quite arbitrary; in particular, it can be a versor of one of the axes:

vₚ = eₚ = [0, …, 0, 1, 0, …, 0]ᵀ  ⇐ 1 at position p,  p = 1, 2, …, n    (32)


2. The case of the diagonal matrix:

A = D = diag(d₁, …, dₙ)  ⇔  Dv = λv
λ₁ = d₁,  λ₂ = d₂,  …,  λₙ = dₙ                                      (33)

The eigenvectors in this case are the same as previously (without however the possibility to choose the eigenvectors in a different way):

vₚ = eₚ = [0, …, 0, 1, 0, …, 0]ᵀ  ⇐ 1 at position p                   (34)
3. The case of the Jordan block:

A = J = a·I + N,  where N has ones on the superdiagonal and zeros elsewhere:

J = [ a  1           ]
    [    a  ⋱       ]
    [       ⋱  1    ]  ⇔  Jv = λv
    [           a    ]

det(J − λI) = (a − λ)ⁿ                                               (35)
λ₁ = λ₂ = ⋯ = λₙ = a

We have again the case of a multiple eigenvalue. However, in this case the matrix has only one eigenvector. To verify this we consider the sequence of equations Jv = av (the last row, a·vₙ = a·vₙ, gives no condition):

a v₁ + v₂ = a v₁  ⟹  v₂ = 0
a v₂ + v₃ = a v₂  ⟹  v₃ = 0
        …                                                            (36)
a vₙ₋₁ + vₙ = a vₙ₋₁  ⟹  vₙ = 0

Therefore the only eigenvector has the form:

v = e₁ = [1, 0, …, 0]ᵀ                                               (37)
Summary: We have shown that matrices with multiple eigenvalues can have different numbers of eigenvectors.
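The defectiveness of the Jordan block (35)-(37) can be checked numerically (an illustrative Python/NumPy sketch, not part of the lecture; the values a = 2 and n = 4 are assumed). The geometric multiplicity, i.e. the dimension of the eigenspace, is n minus the rank of J − aI:

```python
import numpy as np

a, n = 2.0, 4
J = a * np.eye(n) + np.diag(np.ones(n - 1), k=1)   # 4x4 Jordan block, eq. (35)

# lambda = a has algebraic multiplicity n (char. polynomial (a - lambda)^n),
# but the eigenspace dim ker(J - aI) = n - rank(J - aI) is only 1
geom_mult = n - np.linalg.matrix_rank(J - a * np.eye(n))

# the single eigenvector is e_1, as derived in (36)-(37)
e1 = np.zeros(n)
e1[0] = 1.0
```

Direct floating-point eigenvalue computation for such matrices is ill-conditioned, so the rank test above is the more reliable diagnostic.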

Main properties of the eigenvalues and eigenvectors
1. If v is an eigenvector, then αv (α ≠ 0) is also an eigenvector.


2. If v₁, …, vₘ are eigenvectors corresponding to the eigenvalue λ, then also v = Σₖ₌₁ᵐ αₖ·vₖ is an eigenvector corresponding to the same eigenvalue. To verify this we check:

∀k: A vₖ = λ vₖ  ⟹  A v = Σₖ₌₁ᵐ αₖ·A vₖ = Σₖ₌₁ᵐ αₖ·λ vₖ = λ Σₖ₌₁ᵐ αₖ·vₖ = λv,

which shows that v is indeed an eigenvector.
3. Eigenvectors corresponding to different eigenvalues are linearly independent:

A v₁ = λ₁ v₁,  A v₂ = λ₂ v₂  and  λ₁ ≠ λ₂  ⟹  v₁ and v₂ are linearly independent    (38)

To verify this we present a partial proof. Assume that v₁ and v₂ are linearly dependent, i.e., v₁ = β v₂:

A v₁ = λ₁ v₁  ⟹  A(β v₂) = β λ₂ v₂ = λ₁ β v₂  ⟹  (λ₁ − λ₂) v₂ = 0  ⟹  λ₁ = λ₂, a contradiction    (39)

4. If A has n distinct eigenvalues λ₁, …, λₙ, then the corresponding eigenvectors v₁, …, vₙ form a basis in ℝⁿ (the eigenvectors are linearly independent).
5. All eigenvalues of a Hermitian matrix (A = Aᴴ ≡ conj(Aᵀ)) are real. To prove this let us take an arbitrary eigenvector v and calculate the suitable scalar product. The fact that the matrix is Hermitian implies that ⟨Av, v⟩ = ⟨v, Aᴴv⟩ = ⟨v, Av⟩:

⟨Av, v⟩ = ⟨λv, v⟩ = λ⟨v, v⟩
⟨Av, v⟩ = ⟨v, Av⟩ = ⟨v, λv⟩ = conj(λ)⟨v, v⟩   ⟹  λ = conj(λ)         (40)

As ⟨v, v⟩ ≠ 0, this implies that the eigenvalue is always real.

6. Eigenvectors corresponding to distinct eigenvalues of a Hermitian matrix are orthogonal. To prove this let us consider two eigenvectors v and w corresponding to the different eigenvalues λ and μ respectively:

A = Aᴴ,  A v = λ v,  A w = μ w,  λ ≠ μ
⟨Av, w⟩ = ⟨λv, w⟩ = λ⟨v, w⟩
⟨Av, w⟩ = ⟨v, Aw⟩ = ⟨v, μw⟩ = μ⟨v, w⟩                                 (41)
(λ − μ)⟨v, w⟩ = 0  ⟹  ⟨v, w⟩ = 0

We have shown that the eigenvectors are indeed orthogonal.

7. The previous theorem can be further extended, as it appears that actually all eigenvectors of a Hermitian matrix are orthogonal (be the eigenvalues distinct or not). In fact the eigenvectors of a Hermitian matrix form a basis in ℂⁿ.

8. The eigenvalues of a symmetric (Hermitian) positive-definite matrix are positive. A symmetric (Hermitian) positive-definite matrix is defined as one for which, for an arbitrary non-zero vector v, we always have ⟨Av, v⟩ = vᵀAv > 0 (or ⟨Av, v⟩ = vᴴAv > 0). To show that the eigenvalues are positive, consider now the eigenvector v corresponding to the eigenvalue λ:

0 < ⟨Av, v⟩ = λ⟨v, v⟩  ⟹  λ > 0, as always ⟨v, v⟩ > 0                 (42)


It is interesting to note that for an arbitrary matrix A the matrix AᴴA is symmetric (Hermitian) and positive-definite, as

⟨AᴴAv, v⟩ = vᴴAᴴAv = ⟨Av, Av⟩ = ‖Av‖₂² > 0,                           (43)

thus the eigenvalues of AᴴA are always real and positive.

Similar matrices
Def. Matrices B and A are similar if, for some invertible matrix S:

B = S⁻¹AS    (44)
Properties:

1. Similar matrices have the same eigenvalues:

det ¼ − 5# = det ½ =N n½ − 5½ =N #½ = det ½ =N n − 5# ½ = det n − 5# (45)


2. The eigenvectors of the similar matrices are however different:

n = 5 ⇒ ½ =N n = 5½ =N ⇒
½ =N n½ ½ =N = 5 ½ =N = ¼ = 5 , = ½ =N
(46)

Jordan matrix (Jordan normal form of a matrix)

Theorem:
Any n × n square matrix A is similar to a Jordan matrix J (unique up to a permutation of blocks):

∀A ∃S, det S ≠ 0:  J = S⁻¹AS

J = diag(J₁, J₂, …, Jₘ) = [ J₁               ]
                          [     J₂           ]
                          [        ⋱        ] ,  m ≤ n              (47)
                          [            Jₘ    ]

Jₚ = λₚ·I + N of size nₚ × nₚ (λₚ on the diagonal, ones on the superdiagonal),  n₁ + n₂ + ⋯ + nₘ = n

Properties:
1. The eigenvalues λₚ and λ_q from two different blocks are not necessarily distinct.
2. Each block corresponds to one linearly independent eigenvector, thus the matrix A has m linearly independent eigenvectors (the eigenvectors of J are e₁, e_{n₁+1}, e_{n₁+n₂+1}, …, e_{n₁+⋯+n_{m−1}+1}).
3. If m = n we call the matrix A diagonalizable, as J is strictly diagonal and all blocks are 1 × 1:


J = Λ = diag(λ₁, λ₂, …, λₙ)    (48)

4. Hermitian matrices A = Aᴴ are diagonalizable.
5. Normal matrices AAᴴ = AᴴA are diagonalizable.
6. Many other matrices are diagonalizable.
7. If A is diagonalizable, then:

Λ = S⁻¹AS  ⟹  AS = SΛ,    (49)

which means that the eigenvectors of A are the columns of S.

The Jordan theorem does not provide an aid in computations (it is not constructive); however, it characterises all possible configurations of eigenvalues and eigenvectors a matrix can have. It also characterises an important class of diagonalizable matrices.

Power method to calculate eigenvalues

We will present the simplest method to calculate the largest eigenvalue of a real and diagonalisable matrix A, with A v₁ = λ₁ v₁, …, A vₙ = λₙ vₙ and the eigenvalues sorted in decreasing order of modulus, |λ₁| > |λ₂| ≥ |λ₃| ≥ ⋯ ≥ |λₙ|, the first eigenvalue being separated, i.e., larger in modulus than all others. We assume also that the eigenvectors vₚ, p = 1, …, n, form a basis in ℝⁿ.

The algorithm:

x₀ : an arbitrary vector (of random elements)
xₖ = A xₖ₋₁,  k = 1, 2, …                                            (50)
xₖ → v₁ (up to scaling) as k → ∞,  and  λ₁ = ⟨A v₁, v₁⟩ / ⟨v₁, v₁⟩

Proof of convergence:

x₀ = Σₚ₌₁ⁿ αₚ vₚ   (representation in the basis of eigenvectors)

x₁ = A x₀ = Σₚ₌₁ⁿ αₚ A vₚ = Σₚ₌₁ⁿ αₚ λₚ vₚ

xₖ = A xₖ₋₁ = Σₚ₌₁ⁿ αₚ λₚᵏ⁻¹ A vₚ = Σₚ₌₁ⁿ αₚ λₚᵏ vₚ =                (51)

   = α₁λ₁ᵏ v₁ + Σₚ₌₂ⁿ αₚ λₚᵏ vₚ = λ₁ᵏ [ α₁v₁ + Σₚ₌₂ⁿ αₚ (λₚ/λ₁)ᵏ vₚ ] → λ₁ᵏ α₁ v₁  as k → ∞,

with the sum in the square bracket vanishing, as (λₚ/λ₁)ᵏ → 0 for k → ∞.

In practical computations xₖ has to be normalised at each iteration in order to avoid its exponential growth (which is not dangerous in theoretical considerations).
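The algorithm (50), with the normalisation just mentioned, can be sketched in Python with NumPy (an illustrative sketch, not part of the lecture; the test matrix, the iteration count and the use of the Rayleigh quotient for the eigenvalue estimate are assumptions of the example):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Largest-in-modulus eigenvalue of a diagonalisable matrix, sketch of eq. (50)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])   # arbitrary starting vector x_0
    for _ in range(iters):
        x = A @ x                         # x_k = A x_{k-1}
        x /= np.linalg.norm(x)            # normalise to avoid exponential growth
    return (A @ x) @ x / (x @ x)          # Rayleigh quotient <Ax, x>/<x, x>

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                # symmetric; eigenvalues (7 ± sqrt(5))/2
lam1 = power_method(A)
```

For symmetric matrices the Rayleigh quotient converges roughly twice as fast (in correct digits) as the eigenvector itself, which is why it is used for the final estimate.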

Solving the linear equation having the eigenvalues and eigenvectors of the matrix

Suppose that we have a Hermitian matrix A = Aᴴ with known eigenvalues and eigenvectors, A vₚ = λₚ vₚ, p = 1, 2, …, n, and with the eigenvectors orthonormal: ⟨vₚ, v_q⟩ = δₚ_q.

Suppose now that we want to solve the linear equation Ax = b. The eigenvectors form an orthonormal basis in ℝⁿ and as a result the solution can be expressed as x = Σₚ₌₁ⁿ αₚ vₚ, where the coefficients αₚ are initially unknown.

We have Ax = Σₚ₌₁ⁿ αₚ λₚ vₚ and ⟨Ax, v_q⟩ = Σₚ₌₁ⁿ αₚ λₚ ⟨vₚ, v_q⟩ = α_q λ_q, and finally:


α_q = ⟨b, v_q⟩ / λ_q   and the solution:   x = Σₚ₌₁ⁿ ( ⟨b, vₚ⟩ / λₚ ) vₚ    (52)

This is a very simple algorithm, nevertheless not very practical, as eigenvectors and eigenvalues are much more difficult to obtain than the solution of the linear system by some standard method. However, in rare cases when eigenvectors and eigenvalues are indeed known for free (as is the case for the discrete Poisson problem), this forms the basis of an extremely efficient numerical procedure.
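Formula (52) can be checked against a direct solver with NumPy's Hermitian eigensolver (an illustrative sketch, not part of the lecture; the 2×2 matrix and right-hand side are assumed for the example):

```python
import numpy as np

# real symmetric (hence Hermitian) matrix with orthonormal eigenvectors
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
b = np.array([1.0, 0.0])

lam, V = np.linalg.eigh(A)    # columns of V: orthonormal eigenvectors, A V = V diag(lam)

# eq. (52): x = sum_p <b, v_p>/lambda_p * v_p
coeffs = (V.T @ b) / lam
x = V @ coeffs                # -> [2/3, 1/3]
```

The result agrees with a standard direct solve of Ax = b, as it must.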


8. Eigenvectors (eigenfunctions) and eigenvalues of selected matrices (operators)

Eigenfunctions and eigenvalues of the 1D BVP operator

Suppose that we have a finite-dimensional operator Aₙₓₙ connected to the linear equation Aₙₓₙ x = b, and the infinite-dimensional operator L connected to the 1D Boundary Value Problem (BVP):

L = d²/dx²,   u ∈ V = { u ∈ C²⟨0, 1⟩ : u(0) = u(1) = 0 }    (53)

Operator L is connected to the simple BVP below:

d²u/dx² = f(x),   u(x=0) = u(x=1) = 0,   with scalar product:  ⟨u, v⟩ ≝ ∫₀¹ u·v dx    (54)

We seek non-zero uₚ such that:

L uₚ = λₚ uₚ,  uₚ ∈ V    (55)
We distinguish now two separate cases: positive λ = μ² and negative λ = −ω²:

d²u/dx² = μ²u,   u(0) = u(1) = 0     (positive case)
d²u/dx² = −ω²u,  u(0) = u(1) = 0     (negative case)                 (56)

The general exact solution contains two constants C₁ and C₂, which have to be determined such that the boundary conditions are fulfilled:

Positive λ = μ²:  u(x) = C₁e^{μx} + C₂e^{−μx}. The condition u(0) = C₁ + C₂ = 0 gives C₁ = −C₂, while u(1) = C₁(e^{μ} − e^{−μ}) = 0 forces C₁ = C₂ = 0, since e^{μ} ≠ e^{−μ} for μ > 0. There is no solution for positive λ = μ².

Negative λ = −ω²:  u(x) = C₁cos(ωx) + C₂sin(ωx). The condition u(0) = C₁ = 0 gives C₁ = 0, while u(1) = C₂ sin ω = 0 forces ω = kπ, k = 1, 2, …; the constant C₂ is arbitrary.    (57)


Therefore the eigenvalues λₖ and the eigenfunctions uₖ of the original operator are:

λₖ = −k²π²    (58)

uₖ = sin(kπx),  k = 1, 2, …    (59)


We have obtained an infinite sequence of eigenvalues and eigenfunctions. This means that the operator has infinite dimension, which is typical for continuous operators and functional spaces.

All eigenvalues are real, which is connected to the fact that the operator L is "self-adjoint" (symmetric in the previous nomenclature):

⟨Lu, v⟩ = ⟨u, Lv⟩,   u, v ∈ V    (60)
This is easy to show considering the definition of the scalar product (54) and taking advantage of the
Green theorem:


⟨Lu, v⟩ = ∫₀¹ (d²u/dx²)·v dx = [ v·du/dx ]₀¹ − ∫₀¹ (du/dx)(dv/dx) dx =

        = [ v·du/dx − u·dv/dx ]₀¹ + ∫₀¹ u·(d²v/dx²) dx = ⟨u, Lv⟩     (61)

Both terms in square brackets vanish, as both functions u, v ∈ V (53) vanish at the endpoints of the interval ⟨0, 1⟩.

Eigenvalues and eigenvectors of the discrete 1D BVP operator

The BVP

d²u/dx² = f(x),   u(x=0) = u(x=1) = 0

is discretised on a uniform grid as:

(uⱼ₋₁ − 2uⱼ + uⱼ₊₁)/h² = fⱼ,   xⱼ = 0 + jh,  j = 1, …, n,  h = 1/(n+1)

The corresponding matrix Aₕ ∈ ℝⁿˣⁿ, limited to j = 1, …, n, can be expressed as:

Aₕ = (1/h²)·tridiag(1, −2, 1),   uₕ = [u₁, …, uⱼ, …, uₙ]ᵀ,   fₕ = [f₁, …, fⱼ, …, fₙ]ᵀ    (62)

The equation system is then expressed as:

Aₕ uₕ = fₕ    (63)
We seek now the eigenvalues and eigenvectors of Aₕ. Through analogy with the continuous case we choose the eigenvectors in the form of complex exponents consisting of both cosine and sine functions, e^{ikπx} = cos(kπx) + i·sin(kπx), i = √−1. Below we drop the lower index h to shorten the notation. The k-th eigenvector vₖ can then be expressed as:

vₖ = [ e^{ikπh}, …, e^{ikπjh}, …, e^{ikπnh} ]ᵀ,  i.e.  (vₖ)ⱼ = e^{ikπjh}    (64)
This was our guess, and we have to prove now that these are indeed eigenvectors of A:

(A vₖ)ⱼ = ( (vₖ)ⱼ₋₁ − 2(vₖ)ⱼ + (vₖ)ⱼ₊₁ )/h² = ( e^{ikπ(j−1)h} − 2e^{ikπjh} + e^{ikπ(j+1)h} )/h² =    (65)

= e^{ikπjh}·( e^{−ikπh} − 2 + e^{ikπh} )/h² = (vₖ)ⱼ·( e^{−ikπh/2} − e^{ikπh/2} )²/h² =

= (vₖ)ⱼ·( −2i sin(kπh/2) )²/h² = −(vₖ)ⱼ·(4/h²) sin²(kπh/2) = λₖ (vₖ)ⱼ    (66)
We have obtained the proof, and the eigenvalues of A are listed below:


λₖ = −(4/h²)·sin²(kπh/2)    (67)

For small values of k ≪ n and large n:

λₖ ≈ −(4/h²)·(kπh/2)² = −k²π²    (68)

which completely agrees with the first eigenvalues (58) of the continuous case. The ability of the discrete operator to mimic the spectral properties of the continuous one is an important property in numerical analysis.
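Formula (67) can be verified against a general-purpose eigenvalue solver (an illustrative Python/NumPy sketch, not part of the lecture; n = 20 is chosen to match the discrete formulation discussed here):

```python
import numpy as np

n = 20
h = 1.0 / (n + 1)

# tridiagonal matrix A_h of eq. (62)
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

numeric = np.sort(np.linalg.eigvalsh(A))              # symmetric eigensolver

k = np.arange(1, n + 1)
analytic = np.sort(-4.0 * np.sin(k * h * np.pi / 2.0)**2 / h**2)   # eq. (67)
```

The two sorted spectra agree to machine precision, confirming that the guess (64) captures all n eigenpairs.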

The graph below shows both −λₖ (continuous) and −λₖ (discrete) for different values of x = k²π². The number of intervals in the discrete formulation is n = 20.

[Figure (69): −λₖ for the continuous and discrete operators plotted against x = k²π², for 0 ≤ x ≤ 700.]

The value of x = 400 corresponds to roughly k = 6. Therefore the first 6 eigenvalues are almost identical, which makes 30% of all eigenvalues of the discrete operator. From this we may infer that the finite-dimensional operator will correctly resolve the longer waves, but will introduce significant error for much shorter waves with wavelength close to the step-size h.

Eigenfunctions and eigenvalues of the 2D Poisson operator

Δ ≡ ∂²/∂x² + ∂²/∂y²,   u(x, y),  (x, y) ∈ Ω = ⟨0,1⟩ × ⟨0,1⟩,  u|∂Ω = 0    (70)

Δ ≡ Δₓ + Δ_y. Through analogy with the 1D problem the eigenfunctions of Δ are assumed as u_{p,q} = sin(πpx)·sin(πqy):

Δ u_{p,q} = −π²(p² + q²) u_{p,q}    (71)

and the eigenvalues are:

λ_{p,q} = −π²(p² + q²)    (72)


Eigenfunctions and eigenvalues of the discrete 2D Poisson operator

(Δₕu)|_{(xᵢ,yⱼ)} ≡ (uᵢ₋₁,ⱼ − 2uᵢ,ⱼ + uᵢ₊₁,ⱼ)/h² + (uᵢ,ⱼ₋₁ − 2uᵢ,ⱼ + uᵢ,ⱼ₊₁)/h²
uᵢ,ⱼ = 0  for (xᵢ, yⱼ) ∈ ∂Ω                                          (73)
xᵢ = ih,  yⱼ = jh,  h = 1/(n+1),  i, j = 1, …, n,  N = n²
(xᵢ, yⱼ) ∈ Ω = [0, 1] × [0, 1]
The corresponding matrix has the block-tridiagonal form:

M_{N×N} = [ B  N              ]
          [ N  B  N           ]
          [    ⋱  ⋱  ⋱      ]                                       (74)
          [       N  B  N     ]
          [          N  B     ]

where:

Bₙₓₙ = (1/h²)·tridiag(1, −4, 1),   Nₙₓₙ = (1/h²)·tridiag(0, 1, 0) = (1/h²)·Iₙₓₙ    (75)
In analogy to the 1D problem the eigenvectors are expressed componentwise as:

u_{p,q} = [ sin(πhp·i)·sin(πhq·j) ]_{i,j=1,…,n} ∈ ℝᴺ,  N = n·n        (76)


The operator Δₕ can be expressed as Δₕ = Δₕₓ + Δₕ_y, where both partial operators act independently on u_{p,q}:

Δₕₓ u_{p,q} = −(4/h²)·sin²(pπh/2)·u_{p,q}
Δₕ_y u_{p,q} = −(4/h²)·sin²(qπh/2)·u_{p,q}                            (77)

and therefore:
4 "ℎÕ .ℎÕ
5 =− 0sinX
Þ ß + sinX
Þ ß4
),¿
ℎX 2 2
(78)
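As a numerical cross-check, formula (78) can be compared with the eigenvalues of the assembled matrix (74)-(75). The sketch below (Python/NumPy, not part of the original notes) builds the 2D operator from the 1D second-difference matrix via Kronecker products; the variable names are illustrative:

```python
import numpy as np

n = 20
h = 1.0 / (n + 1)
# 1D second-difference matrix; the 2D Laplacian (74)-(75) is a Kronecker sum
T1 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
A = np.kron(np.eye(n), T1) + np.kron(T1, np.eye(n))   # N x N with N = n^2

# eigenvalues predicted by formula (78)
k, l = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1))
lam = -(4.0 / h**2) * (np.sin(k * h * np.pi / 2)**2 + np.sin(l * h * np.pi / 2)**2)

assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam.ravel()))
```

The Kronecker-sum form expresses exactly the splitting $L_h = L_h^x + L_h^y$ used above.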


9. Vector and matrix norms revisited

Vector norms

We recall now the properties of vector norms ($x, y \in X$, $\alpha \in \mathbb{R}$):

$\|\cdot\| : X \to \mathbb{R}_+ \cup \{0\}$

(i) $\|x\| \ge 0$, $\|x\| = 0 \Leftrightarrow x \equiv 0$
(ii) $\|\alpha x\| = |\alpha|\,\|x\|$    (79)
(iii) $\|x + y\| \le \|x\| + \|y\|$

Examples (Hölder norms):

$x \in \mathbb{C}^n$ or $\mathbb{R}^n,\qquad \|x\|_p = \left(\sum_{j=1}^{n} |x_j|^p\right)^{1/p}$    (80)

In particular:

$\|x\|_1 = \sum_{j=1}^{n} |x_j|,\qquad \|x\|_2 = \sqrt{\sum_{j=1}^{n} |x_j|^2},\qquad \|x\|_\infty = \max_j |x_j|$    (81)

Matrix norms

Matrix (operator) norms have the following properties:

(i) $\|A\| \ge 0$, $\|A\| = 0 \Leftrightarrow A \equiv 0$
(ii) $\|\alpha A\| = |\alpha|\,\|A\|$
(iii) $\|A + B\| \le \|A\| + \|B\|$    (82)
(iv) $\|A \cdot B\| \le \|A\| \cdot \|B\|$ (additional property)

An important class of matrix norms is generated (induced) by the vector norms. The matrix norm $\|\cdot\|_M$ is induced by the vector norm $\|\cdot\|_V$ when:

$\|A\|_M \overset{\text{def}}{=} \sup_{x \ne 0} \dfrac{\|Ax\|_V}{\|x\|_V} = \sup_{\|x\|_V = 1} \|Ax\|_V$    (83)

Remark: The induced norm measures how much the matrix $A$ deforms the unit sphere $\|x\|_V = 1$.

Remark: The Euclidean matrix norm:

$\|A\|_E = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} |a_{ij}|^2}$

is not induced by any vector norm, and therefore it is of limited use (nevertheless it is consistent with the second vector norm, see below).

Definition: The matrix norm $\|\cdot\|_M$ is consistent with the vector norm $\|\cdot\|_V$ if:


$\|Ax\|_V \le \|A\|_M \cdot \|x\|_V$    (84)

Remark: Every induced norm is consistent since, from the definition of the induced norm:

$\|A\|_M \ge \dfrac{\|Ax\|_V}{\|x\|_V}$
The following matrix norms are induced by the vector Hölder norms $\|x\|_p$:

Example:

$\|A\|_\infty = \max_{i=1,\dots,n} \sum_{j=1}^{n} |a_{ij}|$    (85)

Proof: see Annex 1

Example:

$\|A\|_1 = \max_{j=1,\dots,n} \sum_{i=1}^{n} |a_{ij}|$    (86)

Proof: see Annex 2

Example:

$\|A\|_2 = \max_{\lambda \in \operatorname{spect}(A^T A)} \sqrt{\lambda}$    (87)

and if $A = A^T$:

$\|A\|_2 = \max_{\lambda \in \operatorname{spect}(A)} |\lambda|$    (88)

Proof: see Annex 3

Remark: The second norm is always the smallest among all induced norms:

$\|A\|_2 \le \|A\|_p$    (89)

Remark:

$\|A\|_2 \le \|A\|_E \le \sqrt{n}\,\|A\|_2$    (90)
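The norms (85)-(88) and the bounds (89)-(90) can be verified for a concrete matrix. The NumPy sketch below uses an illustrative symmetric 3×3 example (not from the notes), so that (88) applies as well:

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # symmetric illustrative example

norm_inf = np.abs(A).sum(axis=1).max()                 # (85): max row sum
norm_1   = np.abs(A).sum(axis=0).max()                 # (86): max column sum
norm_2   = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # (87)
norm_E   = np.sqrt((A**2).sum())                       # Euclidean (Frobenius) norm

assert np.isclose(norm_2, np.abs(np.linalg.eigvalsh(A)).max())  # (88), A symmetric
assert norm_2 <= norm_1 and norm_2 <= norm_inf                  # (89)
assert norm_2 <= norm_E <= np.sqrt(3) * norm_2                  # (90) with n = 3
```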


10. Iterative methods to solve large linear systems

The Jacobi iterative method

Suppose we have a large linear system, which is impractical or impossible to solve by the finite Gauss elimination method:

$Ax = b$ with $A = L + D + U$    (91)

where:

$L = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ * & 0 & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ * & * & * & 0 \end{pmatrix},\qquad D = \begin{pmatrix} * & 0 & \cdots & 0 \\ 0 & * & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & 0 & * \end{pmatrix},\qquad U = \begin{pmatrix} 0 & * & \cdots & * \\ 0 & 0 & \cdots & * \\ \cdots & \cdots & \cdots & * \\ 0 & 0 & 0 & 0 \end{pmatrix}$    (92)

The original equation can thus be rewritten as:

$Dx = b - (L + U)x$    (93)

or alternatively as:

$x = D^{-1}b - D^{-1}(L + U)x$    (94)

The Jacobi iterative method generated by (94) starts with an arbitrary guess $x^0$ (e.g., zero) and generates the consecutive iterates according to the formula:

$x^{m+1} = D^{-1}b - D^{-1}(L + U)x^m,\qquad m = 0, 1, 2, \dots$    (95)

or, in the scalar form, for a fixed value of $m$:

$x_j^{m+1} = \dfrac{1}{a_{jj}}\left(b_j - \sum_{i=1}^{j-1} a_{ji} x_i^m - \sum_{i=j+1}^{n} a_{ji} x_i^m\right),\qquad j = 1,\dots,n$    (96)

We will investigate the sufficient conditions for the convergence of this iterative procedure. Suppose now that $x^*$ denotes the exact solution; we have:

$x^* = D^{-1}b - D^{-1}(L + U)x^*$    (97)

We can now define the error of each iteration as:

$e^m \overset{\text{def}}{=} x^m - x^*,\qquad \varepsilon^m = \|e^m\|_V$    (98)

Subtracting the formulas (95) and (97) one obtains:

$e^{m+1} = -D^{-1}(L + U)\, e^m$    (99)

and the following estimate:

$\|e^{m+1}\|_V = \|{-D^{-1}(L + U)\, e^m}\|_V \le \|D^{-1}(L + U)\|_M\, \|e^m\|_V$

$\varepsilon^{m+1} \le \|D^{-1}(L + U)\|_M\, \varepsilon^m$    (100)

The iterations converge if for some matrix norm $\|\cdot\|_M$:

$\|D^{-1}(L + U)\|_M < 1$    (101)

This is a sufficient condition for convergence.

Remark: It is sufficient to prove the above for any consistent matrix norm.

Example:

Suppose that $A$ is strongly diagonally dominant:

$|a_{ii}| > \sum_{j=1}^{i-1} |a_{ij}| + \sum_{j=i+1}^{n} |a_{ij}|$

$\dfrac{1}{|a_{ii}|}\left(\sum_{j=1}^{i-1} |a_{ij}| + \sum_{j=i+1}^{n} |a_{ij}|\right) < 1$    (102)

$B \overset{\text{def}}{=} D^{-1}(L + U)$

$\|B\|_\infty = \max_i \dfrac{1}{|a_{ii}|}\left(\sum_{j=1}^{i-1} |a_{ij}| + \sum_{j=i+1}^{n} |a_{ij}|\right) < 1$

The Jacobi iterative method is therefore always convergent for strongly diagonally dominant matrices.
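A minimal transcription of the sweep (95)/(96), applied to a small strongly diagonally dominant system (an illustrative sketch, not the notes' code; the test matrix is arbitrary):

```python
import numpy as np

def jacobi(A, b, m_max=500, tol=1e-12):
    """Jacobi iterations (95): x^{m+1} = D^-1 b - D^-1 (L+U) x^m."""
    x = np.zeros_like(b)
    d = np.diag(A)
    for _ in range(m_max):
        x_new = (b - (A @ x - d * x)) / d      # A x - D x = (L+U) x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# a strongly diagonally dominant example, hence guaranteed convergence
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
assert np.allclose(jacobi(A, b), np.linalg.solve(A, b))
```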

Remark:
If $A$ is only weakly diagonally dominant, then $\|B\|_\infty \le 1$ and the sufficient condition (101) is not fulfilled. Possibly $\|B\|_2 < 1$ (as the second norm is the smallest of all), but this is difficult or impossible to prove in the general case.

Example:

Let us now consider a special case of a weakly diagonally dominant matrix $A$, namely the matrix corresponding to the discrete Poisson operator:

$(L_h u)\big|_{(x_i,y_j)} \equiv \dfrac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h^2} + \dfrac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{h^2} = \dfrac{1}{h^2}\left[u_{i,j-1} + u_{i-1,j} - 4u_{i,j} + u_{i+1,j} + u_{i,j+1}\right]$    (103)

with the weakly diagonally dominant matrix $A$ whose rows have the form:

$A = \dfrac{1}{h^2}\left(\dots\ 1\ \dots\ 1\ \ {-4}\ \ 1\ \dots\ 1\ \dots\right)$    (104)

$B = D^{-1}(L + U) = \left(\dots\ {-\tfrac{1}{4}}\ \dots\ {-\tfrac{1}{4}}\ \ 0\ \ {-\tfrac{1}{4}}\ \dots\ {-\tfrac{1}{4}}\ \dots\right) = -\dfrac{1}{4}\left(A h^2 + 4I\right)$    (105)

Note that the eigenvectors of $A$ and $B$ coincide, therefore the eigenvalues of $B$ can be easily calculated. On the other hand $\|B\|_\infty = 1$, which shows that a sharper estimate of the matrix norm is essential to demonstrate convergence of the Jacobi iterative procedure.


$\lambda_{k,l}(B) = -\dfrac{1}{4}\left(h^2 \lambda_{k,l}(A) + 4\right) = -\left(\dfrac{h^2 \lambda_{k,l}(A)}{4} + 1\right) = -\left[1 - \sin^2\!\left(\dfrac{kh\pi}{2}\right) - \sin^2\!\left(\dfrac{lh\pi}{2}\right)\right]$    (106)

$k, l = 1, 2, \dots, n$

$\max_{k,l} |\lambda_{k,l}| = |\lambda_{1,1}| < 1$    (107)

$\|B\|_2 < 1$

One can estimate that (for large values of $n$):

$\|B\|_2 = |\lambda_{1,1}| \approx 1 - \dfrac{\pi^2 h^2}{2}$    (108)

Therefore the Jacobi iterative method remains convergent for the particular matrix (73) corresponding to the discrete Poisson problem.

Remark:

One can estimate the number of iterations $m$ necessary to lower the solution error by the factor $e \approx 2.71$:

$\|B\|_2^m = e^{-1}$

$m \ln \|B\|_2 = -1$, and with $\ln(1 - \alpha) \approx -\alpha$ one obtains:

$m\,\dfrac{\pi^2 h^2}{2} \approx 1$    (109)

$m \approx \dfrac{2}{\pi^2 h^2} = \dfrac{2}{\pi^2}(n + 1)^2$

The number of iterations grows as the square of the number of segments (steps) in one direction. The total cost of finding the solution (as each sparse matrix-vector multiplication over the $N = n^2$ unknowns requires $O(n^2)$ operations) is therefore proportional to:

$O\!\left((n + 1)^2\, n^2\right) \sim O(n^4)$

This is a very expensive method and not at all practical. Krylov methods or the multigrid algorithm are much faster. Nevertheless, the Jacobi method allows for very simple analysis and is the basis of the multigrid approach. The method of Jacobi is also easily transferable to the nonlinear problems which are of our main interest.

The Gauss-Seidel iterative method

The Jacobi iterative method can be slightly modified (and simplified) by observing that the inner iterations are performed in an ordered sequence, so some vector elements are already available from the current iteration. This modifies the original Jacobi algorithm (96) to the scalar version:

$x_j^{m+1} = \dfrac{1}{a_{jj}}\left(b_j - \sum_{i=1}^{j-1} a_{ji} x_i^{m+1} - \sum_{i=j+1}^{n} a_{ji} x_i^m\right),\qquad j = 1,\dots,n$    (110)

or, in the matrix-vector representation:

$x^{m+1} = (D + L)^{-1}\left(b - U x^m\right),\qquad m = 0, 1, 2, \dots$    (111)

The analysis of the Gauss-Seidel algorithm is more involved than the analysis of the Jacobi algorithm, therefore we will consider only the latter.
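The sweep (110) can be transcribed directly; the sketch below (illustrative, not the notes' code) uses an arbitrary diagonally dominant test system:

```python
import numpy as np

def gauss_seidel(A, b, m_max=500, tol=1e-12):
    """Gauss-Seidel sweep (110): updated entries are used as soon as available."""
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(m_max):
        x_old = x.copy()
        for j in range(n):
            s = A[j, :j] @ x[:j] + A[j, j + 1:] @ x_old[j + 1:]
            x[j] = (b[j] - s) / A[j, j]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
assert np.allclose(gauss_seidel(A, b), np.linalg.solve(A, b))
```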

Error analysis of consecutive iterations of the Jacobi iterative algorithm

Suppose we have the matrix corresponding to the 1D Poisson-like problem:

$A_{n\times n} = \dfrac{1}{h^2}\begin{pmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix}$    (112)

Convergence of the Jacobi iterative procedure depends on the properties of:

$B = D^{-1}(L + U) = \begin{pmatrix} 0 & -\tfrac12 & & & \\ -\tfrac12 & 0 & -\tfrac12 & & \\ & \ddots & \ddots & \ddots & \\ & & -\tfrac12 & 0 & -\tfrac12 \\ & & & -\tfrac12 & 0 \end{pmatrix},\qquad D = -\dfrac{2}{h^2}\, I$    (113)

Note that the eigenvectors of $A$, $D$ and $B$ are the same, and:

$B = D^{-1}A - I$    (114)

Since the eigenvalues and eigenvectors of $A$ are known:

$\lambda_k(A) = -\dfrac{4}{h^2}\sin^2\!\left(\dfrac{kh\pi}{2}\right),\qquad k = 1, 2, \dots, n$    (115)

$v^{(k)} = \begin{pmatrix} \sin(\pi h k \cdot 1) \\ \cdots \\ \sin(\pi h k i) \\ \cdots \\ \sin(\pi h k n) \end{pmatrix} \in \mathbb{R}^n$    (116)

the eigenvalues of $B$ can be obtained by simple subtraction:

$\lambda_k(B) = -\dfrac{h^2}{2}\lambda_k(A) - 1 = 2\sin^2\!\left(\dfrac{kh\pi}{2}\right) - 1 = -\cos(k\pi h)$    (117)

The eigenvalues can be visualised graphically:

[Figure (118): $\lambda_k(B) = -\cos(k\pi h)$ plotted against $x = k\pi h \in (0, \pi)$.]

This demonstrates that the low and high frequencies in the error ($k$ small and $k$ large) are very weakly damped ($|\lambda| \approx 1$), while the middle frequencies are strongly damped ($|\lambda| \ll 1$).

Error analysis of Jacobi iterations with underrelaxation

We will repeat the previous analysis for the Jacobi method with underrelaxation (for the same matrix). The simple Jacobi method (94)(95) can be written as:

$x^{m+1} = D^{-1}b - D^{-1}(L + U)x^m$

and, with the aid of the underrelaxation parameter $\omega$, as:

$x^{m+1} = (1 - \omega)x^m + \omega\left[D^{-1}b - D^{-1}(L + U)x^m\right] =$
$\qquad = \omega D^{-1}b + \left[I - \omega\left(I + D^{-1}(L + U)\right)\right]x^m =$    (119)
$\qquad = \omega D^{-1}b + \left(I - \omega D^{-1}A\right)x^m$

The convergence of the iterations depends on the eigenvalues of:

$B_\omega = I - \omega D^{-1}A$

$\lambda_k(B_\omega) = 1 + \omega\,\dfrac{h^2}{2}\,\lambda_k(A) = 1 - 2\omega\sin^2\!\left(\dfrac{kh\pi}{2}\right)$    (120)

$\lambda_1(B_\omega) = 1 - 2\omega\sin^2\!\left(\dfrac{h\pi}{2}\right) \approx 1 - \omega\,\dfrac{h^2\pi^2}{2}$

The eigenvalues are visualised for different values of $\omega$ in the figure below with $x = kh\pi/2$; smaller values of the modulus of the eigenvalue correspond to faster damping of the contribution of the corresponding eigenvector.

[Figure (121): $\lambda_k(B_\omega) = 1 - 2\omega\sin^2 x$, $x = kh\pi/2 \in (0, \pi/2)$, for $\omega = 1,\ 2/3,\ 1/2,\ 1/3$.]

The damping factor $\lambda = 1 - 2\omega\sin^2 x$ at $x = kh\pi/2 = 0,\ \pi/4,\ \pi/2$:

 ω          x = 0    x = π/4    x = π/2    Comments
 ω = 0        1         1          1       Never convergent
 ω = 1/3      1        2/3        1/3      Broadest spectrum of the damped frequencies
 ω = 1/2      1        1/2         0       Very fast damping of high frequencies        (122)
 ω = 2/3      1        1/3       −1/3      Broad spectrum of the damped frequencies
 ω = 1        1         0         −1       Very good damping of middle frequencies

It is clear that for ω = 1/2 the high frequencies of the error are strongly damped, while the low frequencies are kept almost unchanged. Still slightly better properties can be observed for ω = 1/3, for which the range of strongly damped frequencies is broader.

The Jacobi iterative procedure with underrelaxation converges even more slowly than the plain Jacobi iteration (ω = 1), but its essential property is the smoothing (damping) of high frequencies, which forms the basis of the multigrid method. The Gauss-Seidel iterative method has similar properties.
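The damping factors of the table follow directly from (120). A short NumPy sketch (the split of the spectrum into a low and a high half is an illustrative choice, not from the notes):

```python
import numpy as np

n = 31
h = 1.0 / (n + 1)
k = np.arange(1, n + 1)
for omega in (1/3, 1/2, 2/3, 1.0):
    lam = 1.0 - 2.0 * omega * np.sin(k * h * np.pi / 2)**2   # eq. (120)
    worst_high = np.abs(lam[k >= (n + 1) // 2]).max()        # short-wave half
    print(f"omega = {omega:.3f}:  max |lambda| over high frequencies = {worst_high:.3f}")
```

The printout shows that the upper half of the spectrum is damped uniformly for the under-relaxed variants, while for ω = 1 the very highest frequencies are barely damped at all.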


11. Multigrid method

Suppose we have a sequence of meshes covering Ω with step sizes h, 2h, 4h, …, etc., and a sequence of linear systems:

$A_h u_h = b_h$ (expensive to solve)
$A_{2h} u_{2h} = b_{2h}$
$A_{4h} u_{4h} = b_{4h}$ (much cheaper to solve)    (123)
…………………………

How can we use this sequence to find the solution of the first, expensive-to-solve equation much faster?

The naïve approach

The naïve approach consists in iterating first on the coarse mesh, then interpolating the solution onto the denser mesh and iterating on the finer mesh with the interpolated solution as an initial guess. The algorithm may look as below ($s = h \cdot 2^l$; $L$ denotes the number of levels):

$u_s \equiv 0$

for $l = L - 1$ step $-1$ until $0$ do
• $s = h \cdot 2^l$
• Perform a few Jacobi/Gauss-Seidel iterations to smooth out the high-frequency error in $A_s u_s = b_s$, starting from the available $u_s$
• Interpolate $u_s$ onto the finer mesh $s/2$ to obtain the first guess for $u_{s/2}$
end do

This algorithm will work, but it will not be much (if at all) faster than the original Jacobi/Gauss-Seidel algorithm.

Full Multigrid Algorithm

Suppose now that a single level of the Multigrid Algorithm $MG_h$ performs the following recursive action:

$u_h \leftarrow MG_h(u_h, b_h)$

1. Perform a few Jacobi/Gauss-Seidel iterations to smooth out the high-frequency error in $A_h u_h = b_h$, starting from the arbitrary $u_h$
2. If $\Omega_h$ is the coarsest grid, go to 4; else do the residual correction:    (124)
   $b_{2h} \leftarrow I_h^{2h}\left(b_h - A_h u_h\right)$
   $u_{2h} \leftarrow 0$
   $u_{2h} \leftarrow MG_{2h}(u_{2h}, b_{2h})$
3. Correct $u_h \leftarrow u_h + I_{2h}^{h} u_{2h}$
4. Perform (optionally) a few Jacobi/Gauss-Seidel iterations to smooth out the high-frequency error in $A_h u_h = b_h$, starting from the current $u_h$

In the above, $I_h^{2h}$ denotes the restriction operator, which transfers the grid function $u_h$ from $\Omega_h$ onto $\Omega_{2h}$. The 1D example of such an operator is given by:

$u_{2h} \leftarrow I_h^{2h} u_h$

$u_{2h,i} = \dfrac{1}{4}\left(u_{h,2i-1} + 2u_{h,2i} + u_{h,2i+1}\right)$    (125)

As the high-frequency component of $u_h$ is already smoothed out, this restriction will transfer almost all information to the coarse grid.
On the other hand, $I_{2h}^{h}$ stands for the prolongation operator, which transfers the grid function $u_{2h}$ from $\Omega_{2h}$ onto $\Omega_h$. The 1D example of such an operator is given by:

$u_h \leftarrow I_{2h}^{h} u_{2h}$

$u_{h,2i} = u_{2h,i}$
$u_{h,2i+1} = \dfrac{1}{2}\left(u_{2h,i} + u_{2h,i+1}\right)$    (126)

The restriction and prolongation operators (matrices) are, as a rule, related through the following requirement:

$I_{2h}^{h} = c \cdot \left(I_h^{2h}\right)^T$    (127)

where $c \in \mathbb{R}$ denotes a constant number. In the example above, these transfer matrices have the following simple form:

$I_{2h}^{h} = \dfrac{1}{2}\begin{pmatrix} 1 & & \\ 2 & & \\ 1 & 1 & \\ & 2 & \\ & 1 & \ddots \end{pmatrix},\qquad I_h^{2h} = \dfrac{1}{4}\begin{pmatrix} 1 & 2 & 1 & & & \\ & & 1 & 2 & 1 & \\ & & & & \ddots & \end{pmatrix}$    (128)

The multigrid algorithm (124) offers a significant acceleration of convergence in comparison with the classical iterative schemes.
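The cycle (124), with the 1D transfer operators (125)-(126), can be sketched in Python. In the sketch below (illustrative, not the notes' code) the recursive call is replaced by a direct solve on the coarse level, and the grid sizes and the smoother parameters are illustrative assumptions:

```python
import numpy as np

def apply_A(u, h):
    """A_h u for A_h = tridiag(1, -2, 1)/h^2 with zero Dirichlet boundaries."""
    U = np.concatenate(([0.0], u, [0.0]))
    return (U[:-2] - 2.0 * U[1:-1] + U[2:]) / h**2

def smooth(u, b, h, sweeps=3, omega=2.0/3.0):
    """Under-relaxed Jacobi sweeps, the smoother of the previous section."""
    for _ in range(sweeps):
        u = u + omega * (b - apply_A(u, h)) * (-h**2 / 2.0)   # D^-1 = -h^2/2
    return u

def restrict(r):
    """Full weighting (125); fine grid has 2m+1 interior points, coarse has m."""
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(uc):
    """Linear interpolation (126), with zero boundary values on the fine grid."""
    m = len(uc)
    uf = np.zeros(2 * m + 1)
    uf[1::2] = uc
    uf[2:-1:2] = 0.5 * (uc[:-1] + uc[1:])
    uf[0], uf[-1] = 0.5 * uc[0], 0.5 * uc[-1]
    return uf

def two_grid(u, b, h):
    """One cycle of (124), with a direct coarse solve instead of recursion."""
    u = smooth(u, b, h)                       # step 1: pre-smoothing
    rc = restrict(b - apply_A(u, h))          # step 2: restricted residual
    m = len(rc)
    Ac = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) / (2.0 * h)**2
    u = u + prolong(np.linalg.solve(Ac, rc))  # step 3: coarse-grid correction
    return smooth(u, b, h)                    # step 4: post-smoothing

# manufactured problem on 33 interior points
n = 33
h = 1.0 / (n + 1)
u_exact = np.sin(np.pi * h * np.arange(1, n + 1))
b = apply_A(u_exact, h)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, b, h)
assert np.max(np.abs(u - u_exact)) < 1e-8
```

The error drops by a large, h-independent factor per cycle, which is precisely the acceleration over the plain smoother that the algorithm (124) promises.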


12. Matrix functions

Suppose we have an arbitrary scalar function $f(x)$, be it $x^2$, $\sin x$, $e^x$, $\sqrt{x}$, or $|x|$. As we are interested in multidimensional problems, we would like to define the meaning of such functions acting on matrices (and perhaps also on other operators).

It is obvious that the naïve element-wise definition:

$f(A) \overset{\text{def}}{=} \begin{pmatrix} f(a_{11}) & \cdots & f(a_{1n}) \\ \cdots & \cdots & \cdots \\ f(a_{n1}) & \cdots & f(a_{nn}) \end{pmatrix}$    (129)

cannot be considered, as with this definition $A^2 \ne A \cdot A$.

We will therefore try to propose a more reasonable approach.

1. Suppose that we start with a polynomial function $f(x) = c_m x^m + \cdots + c_1 x^1 + c_0$. In this case the definition of $f(A)$ is quite straightforward:

$f(A) \overset{\text{def}}{=} c_m A^m + \cdots + c_1 A^1 + c_0 I$    (130)

2. Equally straightforward is the case when the scalar function is defined by an infinite power/Taylor series:

$f(x) = \sum_{j=0}^{\infty} c_j x^j$    (131)

as is the case, for example, for $e^x = \sum_{j=0}^{\infty} \frac{x^j}{j!}$. Then we can extend this formula to matrices:

$f(A) \overset{\text{def}}{=} \sum_{j=0}^{\infty} c_j A^j$    (132)

This series will remain convergent provided that the corresponding scalar series is convergent:

$\sum_{j=0}^{\infty} |c_j|\, \|A\|_M^j < \infty$    (133)

where $\|A\|_M$ denotes an arbitrary matrix norm.

In particular we can define:

$\sin A \overset{\text{def}}{=} A - \dfrac{A^3}{3!} + \dfrac{A^5}{5!} - \dfrac{A^7}{7!} + \cdots$    (134)

and this definition will be consistent with the expected properties of the $\sin A$ function; in particular the usual trigonometric identities are preserved, e.g., $\sin 2A = 2\sin A \cos A$.
3. Suppose now that $f(x)$ is quite arbitrary, and the matrix $A$ is diagonalisable, i.e., $A = S^{-1}\Lambda S$, where $\Lambda = \operatorname{diag}(\lambda_j)$. Then we can define the matrix function as the scalar function acting on each eigenvalue separately:

$f(A) \overset{\text{def}}{=} S^{-1} f(\Lambda) S = S^{-1}\begin{pmatrix} f(\lambda_1) & 0 & \cdots & 0 \\ 0 & f(\lambda_2) & \cdots & 0 \\ \cdots & \cdots & \ddots & \cdots \\ 0 & 0 & \cdots & f(\lambda_n) \end{pmatrix} S = S^{-1}\operatorname{diag}\!\left(f(\lambda_k)\right) S$    (135)

We will show that this definition is fully consistent with the previous one (via power series). For this purpose we observe that:

$A^j = S^{-1}\Lambda S \cdot S^{-1}\Lambda S \cdot \ldots \cdot S^{-1}\Lambda S = S^{-1}\Lambda^j S$    (136)

Therefore, one obtains the identity:

$f(A) \overset{\text{def}}{=} \sum_{j=0}^{\infty} c_j A^j = \sum_{j=0}^{\infty} c_j S^{-1}\Lambda^j S = S^{-1}\left(\sum_{j=0}^{\infty} c_j \Lambda^j\right) S = S^{-1}\operatorname{diag}\!\left(\sum_{j=0}^{\infty} c_j \lambda_k^j\right) S = S^{-1}\operatorname{diag}\!\left(f(\lambda_k)\right) S$    (137)

proving that both formulations are fully equivalent.

This approach allows us to define nonanalytic functions like $\sqrt{A}$ or $|A|$ and expect them to behave as their scalar counterparts do. However, not all relations known from scalar algebra transfer to matrix functions (mainly because matrices do not commute, $AB \ne BA$). As a consequence, the well-known relation does not hold:

$e^{A+B} \ne e^A \cdot e^B$    (138)

unless $A$ and $B$ have the same eigenvectors. However, diagonalisable matrices having the same eigenvectors do commute, as $AB = S^{-1}\Lambda_A S\, S^{-1}\Lambda_B S = S^{-1}\left(\Lambda_A \Lambda_B\right)S = S^{-1}\left(\Lambda_B \Lambda_A\right)S = BA$. Similarly, all matrices $f(A)$ and $A$ commute.
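Definition (135) and its equivalence (137) with the power series can be checked numerically. A small NumPy sketch (the test matrix is an illustrative choice; note that NumPy's `eig` uses the column-eigenvector convention $A = S\Lambda S^{-1}$):

```python
import numpy as np

def matfunc(f, A):
    """f(A) via diagonalisation (135); assumes A is diagonalisable.
    NumPy convention: A = S diag(lam) S^-1 with eigenvectors as columns."""
    lam, S = np.linalg.eig(A)
    return (S * f(lam)) @ np.linalg.inv(S)   # S diag(f(lam)) S^-1

A = np.array([[0.0, 1.0], [-1.0, 0.0]])      # rotation generator, eigenvalues +-i

# power-series definition (132) of exp, truncated
E, term = np.eye(2), np.eye(2)
for j in range(1, 30):
    term = term @ A / j
    E = E + term

assert np.allclose(matfunc(np.exp, A).real, E)
```

Both routes produce the rotation matrix $\begin{pmatrix}\cos 1 & \sin 1\\ -\sin 1 & \cos 1\end{pmatrix}$, confirming the equivalence (137) for this case.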

Matrix linear ODE

Knowing this, we can attempt to solve the multidimensional linear Ordinary Differential Equation (ODE) in which the matrix $A = S^{-1}\Lambda S$ is diagonalisable:

$\begin{cases} \dfrac{du}{dt} = A u \\ u(t = 0) = u_0 \end{cases},\qquad A \in \mathbb{R}^{n\times n},\quad u \in \mathbb{R}^n$    (139)

By analogy to the scalar case we will assume that $u(t) = e^{At}u_0$. To show that this is indeed a correct solution, it is enough to evaluate the derivative $\frac{d}{dt}e^{At}$ directly from the definition:

$\dfrac{d}{dt}e^{At} \overset{\text{def}}{=} \lim_{\tau \to 0} \dfrac{e^{A(t+\tau)} - e^{At}}{\tau} = e^{At} \cdot \lim_{\tau \to 0} \dfrac{e^{A\tau} - I}{\tau} = e^{At} \cdot \lim_{\tau \to 0} \dfrac{I + \frac{A\tau}{1!} + \frac{A^2\tau^2}{2!} + \cdots - I}{\tau} = e^{At} \cdot \lim_{\tau \to 0}\left(A + \dfrac{A^2\tau}{2!} + \cdots\right) = A e^{At}$    (140)

This confirms that the proposed solution indeed fulfils the equation (139).


13. Nonlinear equations

We will now investigate the possibility to solve the general nonlinear equation

$F(u) = 0$    (141)

in which $u$ might be a scalar, a vector or a function (or even a vector function), while $F$ is, respectively, either a scalar function, a vector function or a functional operator.

Problem A
To illustrate the first possibility we consider the simple problem to solve

$e^{-x} = x$ or $e^{-x} - x = 0$    (142)

where $x$ is a scalar real variable and $F(x) \equiv e^{-x} - x$.

Problem B
The second option is usually illustrated by a list of algebraic nonlinear equations; here, however, we propose a system of $n$ equations which can be written using matrix-vector operators:

$\left(A + I \cdot e^{-x^T x}\right)x = b,\qquad x \in \mathbb{R}^n,\ b \in \mathbb{R}^n,\ A \in \mathbb{R}^{n\times n}$    (143)

where $I$ denotes the identity matrix, while $x^T x \equiv \|x\|_2^2$ is always a non-negative real number. Here $F(x) \equiv \left(A + I \cdot e^{-x^T x}\right)x - b \equiv Ax + x\,e^{-x^T x} - b$. The matrix $A$ and the RHS (Right-Hand Side) vector $b$ are given.

Problem C
The third option can be illustrated by any nonlinear ODE (Ordinary Differential Equation) or PDE (Partial Differential Equation), e.g., the Euler or Navier-Stokes equations. For simplicity we propose the 1D nonlinear convection-diffusion BVP:

$\begin{cases} \dfrac{d^2 u}{dx^2} + u\,\dfrac{du}{dx} = f(x) \\ u(a) = u_a \\ u(b) = u_b \end{cases}$    (144)

where $u(x)$ is an unknown, sufficiently smooth function, $f(x)$ denotes the known function on the RHS, while $a < b$, $u_a$, $u_b$ represent known real numbers. The nonlinearity present in this equation is similar to the convective nonlinearity of the Euler or Navier-Stokes equations. The function $F$ in this case is

$F(u) \equiv \begin{pmatrix} \dfrac{d^2 u}{dx^2} + u\,\dfrac{du}{dx} - f(x) \\ u(a) - u_a \\ u(b) - u_b \end{pmatrix}$    (145)

In the above, the first equation is nonlinear, while the 2nd and 3rd are linear. The RHS zero consists of one zero-function and two real zeros.

As all these problems are nonlinear, we cannot be sure that a solution always exists, nor that it is unique when it does. Nevertheless we will present a few methods that allow one to solve such systems in a systematic manner (if the solution exists and if the corresponding iterative procedure converges, which usually is not known a priori). However, in some rare cases it is possible to decide about the existence and the uniqueness of the solution.

There exist many iterative algorithms to solve nonlinear problems, which are applicable under different conditions. The basic algorithms are:

• Fixed-point iterations
• Bisection, which is generally applicable to scalar equations (omitted here)
• The secant method, which can be regarded as a quasi-Newton method in which the derivative is replaced by a finite difference (again omitted)
• The method of frozen coefficients (applicable for some ODEs and PDEs)
• The Newton method, in which the derivative needs to be calculated
• Embedding in a pseudo-time problem (applicable for some ODEs and PDEs)

The Method of Frozen Coefficients

This method is purely heuristic and cannot be called "systematic". It is based on the observation that the higher derivatives are somehow "more important" in the equation than the function values themselves. Therefore, if the nonlinearity consists of the two, we may take the function values from the previous iteration and the derivative from the present one. This is illustrated (for Problem C) in the flowchart below:

1. At the zeroth iteration take $k = 0$ and $u_0(x) \equiv 0$.
2. Solve the linear BVP:
   $\begin{cases} \dfrac{d^2 u_{k+1}}{dx^2} + u_k\,\dfrac{du_{k+1}}{dx} = f(x) \\ u_{k+1}(a) = u_a \\ u_{k+1}(b) = u_b \end{cases}$
3. Calculate the difference between $u_k(x)$ and $u_{k+1}(x)$:
   $\delta := \max_x |u_k(x) - u_{k+1}(x)|$
4. If $\delta < \varepsilon$ then STOP (the equation is solved with accuracy $\varepsilon$).
5. Substitute $k := k + 1$ and return to step 2.

From numerical experience we know that this iterative procedure will converge if the solution is "small" and will diverge if the solution is "large", as in the latter case the nonlinear term becomes more important than the second derivative. In Fluid Mechanics this technique will be successful for low Re and will fail for higher Re (say for Re > 1000).

The method of frozen coefficients is very simple and follows the engineering intuition which allows one to solve the linear problem first and include the nonlinearities at a later stage (e.g., when we solve fluid flow or heat transfer problems, we neglect in the first approximation the dependence of viscosity and thermal conductivity on temperature). Although simple, the procedure is quite arbitrary with respect to the choice of the element of the equation that should be taken from the previous iteration. As a result, the method will fail for more complex nonlinearities, such as:

$\dfrac{d^2 u}{dx^2} + \left(\dfrac{du}{dx}\right)^3 = f(x)$    (146)

In this case we have no indication how to proceed with the 3rd power of the first derivative.


Newton Method (Quasi-Linearisation)

The Newton method is best presented (in the scalar case, i.e., for Problem A) by considering the Taylor expansion of $f(x^*)$ around a certain point $x$:

$f(x^*) = f(x) + f'(x)(x^* - x) + \cdots$    (147)

By rejecting the higher-order terms and by assuming that $x^*$ denotes the sought solution (i.e., $f(x^*) \equiv 0$), we obtain the linear equation for $x^*$:

$f'(x)(x^* - x) = -f(x)$    (148)

Since this is only a rough estimate, we replace $x$ by the current approximation of the solution $x_k$, while $x^*$ is replaced by the next approximation $x_{k+1}$:

$f'(x_k)(x_{k+1} - x_k) = -f(x_k)$    (149)

And finally Newton's formula is obtained:

$x_{k+1} = x_k - \dfrac{f(x_k)}{f'(x_k)}$    (150)

If we apply it to Problem A ($f(x) \equiv e^{-x} - x$), we get:

$x_{k+1} = x_k + \dfrac{e^{-x_k} - x_k}{e^{-x_k} + 1}$    (151)

The consecutive iterations (starting with $x_0 = 1$) deliver:

 k   x_k                                        Error = f(x_k)
 0   1                                          −0.63
 1   0.53                                       0.046
 2   0.5669                                     0.00024494
 3   0.567143285                                6.92×10⁻⁹
 4   0.567143290409783869                       5.54×10⁻¹⁸
 5   0.56714329040978387299996866221035554749   3.54×10⁻³⁶

(note that 35 digits are accurate in the last iteration; this was possible because these computations were carried out using 120 digits of accuracy)
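Formula (150) transcribes directly into code. The sketch below reproduces the Problem A iteration (151) in double precision (function names are illustrative, not from the notes):

```python
import math

def newton(f, fprime, x, tol=1e-15, k_max=50):
    """Newton iterations (150); quadratic convergence near a simple root."""
    for _ in range(k_max):
        dx = f(x) / fprime(x)
        x = x - dx
        if abs(dx) < tol:
            break
    return x

root = newton(lambda x: math.exp(-x) - x,        # Problem A
              lambda x: -math.exp(-x) - 1.0,     # its derivative
              1.0)                               # x_0 = 1 as in the table
assert abs(math.exp(-root) - root) < 1e-14
```

In double precision the iteration stalls at about 16 accurate digits, which is why the table above required extended-precision arithmetic.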
Newton's method is very fast if started from a close neighbourhood of the root. It usually fails if the first approximation is far from the root. It may also fail if the first derivative vanishes near the root.

The same approach can be used in the vector case, again by considering the Taylor expansion:

$F(x^*) = F(x) + F'(x)(x^* - x) + \cdots$

$x \equiv \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},\quad x^* \equiv \begin{pmatrix} x_1^* \\ \vdots \\ x_n^* \end{pmatrix},\quad F(x) \equiv \begin{pmatrix} f_1(x_1,\dots,x_n) \\ \vdots \\ f_n(x_1,\dots,x_n) \end{pmatrix},\quad F'(x) \equiv \dfrac{\partial F}{\partial x} \equiv \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \cdots & \ddots & \cdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}$    (152)

Replacing $x^*$ by $x^{k+1}$ and $x$ by $x^k$, we obtain the linear equation system for the current correction $\Delta \equiv x^{k+1} - x^k$:

$F'(x^k)\,\Delta = -F(x^k)$    (153)

After this equation is solved, we obtain the next approximation by the formula $x^{k+1} = x^k + \Delta$.

In the case of the functional equation (Problem C) we first must understand what the derivative of a functional operator $F$ with respect to $u$ is (it will be an analogue of the directional derivative). Such an analogue is called the Gâteaux derivative and is defined as:

$\langle F'(u), v \rangle \overset{\text{def}}{=} \lim_{\tau \to 0} \dfrac{F(u + \tau v) - F(u)}{\tau}$    (154)

where the left-hand side should be read as the derivative of $F$ at $u$ in the direction of $v$. It should additionally be noted that the above formula is linear with respect to $v$.

The Gâteaux derivative differs from the usual derivative (including the directional derivative) by incorporating the directional vector $v$ into the formula. This is illustrated in the table below:

 Case                                                       Gâteaux derivative $\langle F'(u), v\rangle$          Linear function       Gâteaux derivative $\langle F'(u), v\rangle$
 $F:\mathbb{R}\to\mathbb{R}$, $x, v \in \mathbb{R}$         $\left.\dfrac{dF}{dx}\right|_{x=u} \cdot v$           $F(x) = c\cdot x$     $c\cdot v$
 $F:\mathbb{R}^n\to\mathbb{R}^n$, $x, v \in \mathbb{R}^n$   $\left.\dfrac{\partial F}{\partial x}\right|_{x=u} \cdot v$   $F(x) = A\cdot x$     $A\cdot v$

where $c$ and $A$ denote a constant scalar and matrix respectively ($\partial F/\partial x$ stands for the Jacobian matrix).
The corresponding Taylor formula for the functional case has the form:

$F(u^*) = F(u) + \langle F'(u), u^* - u \rangle + \cdots$    (155)

out of which, after rejecting the higher-order terms and assuming that $u^*$ is a solution, the linear problem is obtained:

$\langle F'(u), u^* - u \rangle = -F(u)$    (156)

Replacing $u^*$ by $u_{k+1}$ and $u$ by $u_k$, we obtain the linear problem for the current correction $\Delta u \equiv u_{k+1} - u_k$:

$\langle F'(u_k), \Delta u \rangle = -F(u_k)$    (157)

By solving this problem we obtain $\Delta u$ and subsequently $u_{k+1} := u_k + \Delta u$.

The algorithm will be presented for Problem C, for which:

$F(u) \equiv \begin{pmatrix} u'' + u u' - f \\ u(a) - u_a \\ u(b) - u_b \end{pmatrix}$    (158)

where for simplicity the derivatives were replaced by primes.

The Gâteaux derivative is calculated from its definition:


$\langle F'(u), v \rangle \overset{\text{def}}{=} \lim_{\tau \to 0} \dfrac{1}{\tau}\left[\begin{pmatrix} (u + \tau v)'' + (u + \tau v)(u + \tau v)' - f \\ (u + \tau v)(a) - u_a \\ (u + \tau v)(b) - u_b \end{pmatrix} - \begin{pmatrix} u'' + u u' - f \\ u(a) - u_a \\ u(b) - u_b \end{pmatrix}\right] = \lim_{\tau \to 0} \dfrac{1}{\tau}\begin{pmatrix} \tau\left(v'' + u v' + v u'\right) + \tau^2 v v' \\ \tau\, v(a) \\ \tau\, v(b) \end{pmatrix} = \begin{pmatrix} v'' + u v' + v u' \\ v(a) \\ v(b) \end{pmatrix}$    (159)
The final algorithm to solve the nonlinear Problem C by Newton's (quasi-linearisation) method is therefore:

1. At the zeroth iteration take $k = 0$ and $u_0(x) \equiv 0$.
2. Solve the linear BVP:
   $\begin{cases} \dfrac{d^2 \Delta u}{dx^2} + u_k\,\dfrac{d\Delta u}{dx} + \dfrac{du_k}{dx}\,\Delta u = -\left(\dfrac{d^2 u_k}{dx^2} + u_k\,\dfrac{du_k}{dx} - f\right) \\ \Delta u(a) = -\left(u_k(a) - u_a\right) \\ \Delta u(b) = -\left(u_k(b) - u_b\right) \end{cases}$
3. Calculate the norm of the increment $\Delta u$:
   $\delta := \max_x |\Delta u(x)|$
4. Calculate the next iteration $u_{k+1}(x) = u_k(x) + \Delta u(x)$.
5. If $\delta < \varepsilon$ then STOP (the equation is solved with accuracy $\varepsilon$).
6. Substitute $k := k + 1$ and return to step 2.
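The algorithm above can be realised with central finite differences. The sketch below is one possible discretisation (not the notes' code); it uses the linearised operator (159) as a tridiagonal Jacobian and a manufactured solution to check the result:

```python
import numpy as np

def newton_bvp(f, a, b, ua, ub, n=200, k_max=20, tol=1e-12):
    """Newton/quasi-linearisation for u'' + u u' = f(x), u(a)=ua, u(b)=ub,
    discretised with central differences on n interior points (a sketch)."""
    h = (b - a) / (n + 1)
    x = a + h * np.arange(1, n + 1)
    u = np.zeros(n)                                    # step 1: zero iterate
    for _ in range(k_max):
        U = np.concatenate(([ua], u, [ub]))            # pad with boundary values
        up = (U[2:] - U[:-2]) / (2 * h)                # u'
        upp = (U[2:] - 2 * U[1:-1] + U[:-2]) / h**2    # u''
        F = upp + u * up - f(x)                        # residual of the ODE
        # linearised operator (159): v'' + u v' + u' v, as a tridiagonal matrix
        J = (np.diag(-2.0 / h**2 + up)
             + np.diag(1.0 / h**2 + u[:-1] / (2 * h), 1)
             + np.diag(1.0 / h**2 - u[1:] / (2 * h), -1))
        du = np.linalg.solve(J, -F)                    # step 2
        u = u + du                                     # step 4
        if np.max(np.abs(du)) < tol:                   # steps 3 and 5
            break
    return x, u

# manufactured check: u*(x) = sin(pi x), f = u*'' + u* u*'
f = lambda s: -np.pi**2 * np.sin(np.pi * s) + np.pi * np.sin(np.pi * s) * np.cos(np.pi * s)
x, u = newton_bvp(f, 0.0, 1.0, 0.0, 0.0)
assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3
```

The residual drops quadratically with the Newton iterations; the remaining error in the check is the second-order truncation error of the finite-difference grid, not of the Newton iteration.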

As noted earlier, the Gâteaux derivative of a linear operator $F(u) := Bu$ has a very simple form:

$\langle F'(u), v \rangle \overset{\text{def}}{=} \lim_{\tau \to 0} \dfrac{B(u + \tau v) - Bu}{\tau} = \lim_{\tau \to 0} \dfrac{Bu + \tau Bv - Bu}{\tau} = Bv$    (160)

Exercise:
Present the Newton algorithm for Problem B, and for Problem C with the differential equation (146) and with the boundary conditions $u(a) + 2\left.\dfrac{du}{dx}\right|_{a} = u_1$ and $u(b) = u_b$.

Exercise:
Present the Newton algorithm for the following BVP:

$\begin{cases} \dfrac{\partial^2 u}{\partial x^2} + \left(1 + u^2\right)\dfrac{\partial^2 u}{\partial y^2} = 1, & (x,y) \in \Omega = \langle 0,1\rangle \times \langle 0,1\rangle \\ u|_{\partial\Omega} = 0 \end{cases}$    (161)


14. Model scalar equations in 1D

The model equations described below are important for understanding the behaviour of more complex PDEs (also nonlinear), but they also play a role in understanding certain behaviour of the discretisation schemes.

Advection equation (model hyperbolic equation)

The linear advection equation describes the simplest process of transport of a passive scalar. This equation involves the evolution in time and space and can be expressed as:

$\begin{cases} \dfrac{\partial u}{\partial t} + a\,\dfrac{\partial u}{\partial x} = 0 \\ u(x, t = 0) = f(x) \end{cases}$ or, with an alternative notation, as $\begin{cases} u_t + a u_x = 0 \\ u(x, 0) = f(x) \end{cases}$    (162)

where $a$ is a constant value. The general solution to this Initial Value Problem (IVP) is given by:

$u(x,t) = f(x - at)$    (163)

This is easily verified, as $u_t = -a f'$ while $u_x = f'$ (therefore $u_t + a u_x = 0$).

Now it is clear that the solution of the advection equation is a function which moves to the right (if $a > 0$) with a constant speed $a$, while the shape of the function does not change. This is illustrated in the figure below, which presents the solution at $t = 0, 1, 2$ ($a = 1.5$).
[Figure: the solution $u(x,t) = f(x - at)$ at $t = 0, 1, 2$ for $a = 1.5$; the initial profile translates to the right without changing its shape.]

It is also beneficial to perform a complex Fourier analysis of this equation, introducing the method which will be useful in the subsequent cases. For this purpose we assume that we have a complex-valued function and a complex-valued initial condition (in the form of the complex Fourier mode $e^{ikx} = \cos kx + i\sin kx$, where $k > 0$ denotes the wave number):

$\begin{cases} u_t + a u_x = 0 \\ u(x, 0) = e^{ikx} \end{cases}$    (164)

The solution is sought in the form:

$u(x,t) = e^{i(kx - \omega t)}$    (165)

where $\omega$ is an unknown coefficient (the angular frequency), which has to be determined. We have:

$u_x = ik\, e^{i(kx - \omega t)} = ik\, u,\qquad u_t = -i\omega\, e^{i(kx - \omega t)} = -i\omega\, u$    (166)

As a result, the Partial Differential Equation (PDE) (164) is replaced by a simple algebraic equation allowing us to determine $\omega$:

$-i\omega + ika = 0 \;\Longrightarrow\; \omega = ka$    (167)

The final solution is expressed as:

$u(x,t) = e^{ik(x - at)}$    (168)

which perfectly agrees with the previous expression for the solution. The most important properties of this exact solution are:

• The amplitude of the solution does not change.
• All waves move with the same speed $a$ (irrespective of the value of $k$).
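The mode (168) can be sanity-checked against the PDE by finite differences; a small sketch (the values of $a$, $k$ and the sample point are arbitrary illustrative choices):

```python
import numpy as np

a, k = 1.5, 3.0
u = lambda x, t: np.exp(1j * k * (x - a * t))   # candidate solution (168)

# central finite differences confirm u_t + a u_x = 0 at an arbitrary point
x0, t0, d = 0.7, 0.4, 1e-6
u_t = (u(x0, t0 + d) - u(x0, t0 - d)) / (2 * d)
u_x = (u(x0 + d, t0) - u(x0 - d, t0)) / (2 * d)
assert abs(u_t + a * u_x) < 1e-6
assert abs(abs(u(x0, t0)) - 1.0) < 1e-12        # the amplitude does not change
```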

Diffusion equation (model parabolic equation)

The diffusion PDE is a second-order equation and describes phenomena related to viscous, molecular or thermal diffusion:

$\begin{cases} \dfrac{\partial u}{\partial t} = \nu\,\dfrac{\partial^2 u}{\partial x^2} \\ u(x, t = 0) = e^{ikx} \end{cases}$ or, with an alternative notation, as $\begin{cases} u_t = \nu u_{xx} \\ u(x, 0) = e^{ikx} \end{cases}$    (169)

In this case we are unable to provide a simple solution for a general initial condition $f(x)$, and only the Fourier-mode initial condition is considered. Assuming that the solution is $u(x,t) = e^{i(kx - \omega t)}$, we have again:

$u_{xx} = -k^2\, e^{i(kx - \omega t)} = -k^2 u,\qquad u_t = -i\omega\, e^{i(kx - \omega t)} = -i\omega\, u$    (170)

which allows us to determine $\omega$ and the solution as:

$-i\omega = -\nu k^2 \;\Longrightarrow\; \omega = -i\nu k^2$

$u(x,t) = e^{-k^2\nu t}\, e^{ikx}$    (171)

The first exponential term plays the role of the decreasing (in time) amplitude of the initial Fourier mode. The most important properties of this exact solution are:

• The amplitude of the solution decreases ($\nu$ is always positive), and it decreases the faster, the higher the wave number.
• The form of the initial Fourier mode is preserved.

These features of the solution are presented in the figure below for three consecutive moments of time:

[Figure: the decaying Fourier mode $u = e^{-k^2\nu t} e^{ikx}$ (real part shown) at $t = 0, 1, 2$.]

Advection-Diffusion equation

The advection-diffusion PDE is a second-order equation and describes phenomena related to transport in conjunction with viscous, molecular or thermal diffusion:

$\begin{cases} \dfrac{\partial u}{\partial t} + a\,\dfrac{\partial u}{\partial x} = \nu\,\dfrac{\partial^2 u}{\partial x^2} \\ u(x, t = 0) = e^{ikx} \end{cases}$ or, with an alternative notation, as $\begin{cases} u_t + a u_x = \nu u_{xx} \\ u(x, 0) = e^{ikx} \end{cases}$    (172)

In this case we are again unable to provide a simple solution for a general initial condition $f(x)$, and only the Fourier-mode initial condition is considered. Assuming that the solution is $u(x,t) = e^{i(kx - \omega t)}$, we have again:

$u_{xx} = -k^2 u,\qquad u_t = -i\omega u,\qquad u_x = ik u$    (173)

which allows us to determine $\omega$ and the solution as:

$-i\omega + ika = -\nu k^2 \;\Longrightarrow\; \omega = ka - i\nu k^2$

$u(x,t) = e^{-k^2\nu t}\, e^{ik(x - at)}$    (174)

The most important properties of this exact solution are:

• The amplitude of the solution decreases ($\nu$ is always positive), and it decreases the faster, the higher the wave number.
• The initial Fourier mode travels with the speed $a$, but otherwise its form does not change.

Telegraph equation

The telegraph PDE is a third-order equation describing the wave propagation in long telegraph lines:

$\begin{cases} \dfrac{\partial u}{\partial t} + a\,\dfrac{\partial u}{\partial x} = -\lambda\,\dfrac{\partial^3 u}{\partial x^3} \\ u(x, t = 0) = e^{ikx} \end{cases}$ or, with an alternative notation, as $\begin{cases} u_t + a u_x = -\lambda u_{xxx} \\ u(x, 0) = e^{ikx} \end{cases}$    (175)

In this case we are again unable to provide a simple solution for a general initial condition $f(x)$, and only the Fourier-mode initial condition is considered (indeed this is of interest for telegraph lines, in which the propagation of waves is of main interest). Assuming that the solution is $u(x,t) = e^{i(kx - \omega t)}$, we have again:

$u_{xxx} = -ik^3\, e^{i(kx - \omega t)} = -ik^3 u,\qquad u_t = -i\omega u,\qquad u_x = ik u$    (176)

which allows us to determine $\omega$ and the solution as:

$-i\omega + ika = i\lambda k^3 \;\Longrightarrow\; \omega = ka - \lambda k^3 = k\left(a - \lambda k^2\right)$

$u(x,t) = e^{ik\left(x - (a - \lambda k^2)t\right)}$    (177)

In the above, $c(k) = a - \lambda k^2$ plays the role of the speed of wave propagation. The most important properties of this exact solution are:

• The amplitude of the solution does not change.
• The long waves ($\lambda k^2 \ll a$) travel with the speed $a$; the shorter waves ($k$ larger), depending on the sign of $\lambda$, propagate faster (if $\lambda < 0$) or slower (if $\lambda > 0$).

The fact that the speed of propagation depends on the wave number is called wave dispersion, and this phenomenon constitutes the main reason for the distortion of signals in telegraph (telephone) lines.


15. Multidimensional first order PDE

In analogy to the scalar equation (164) we will now consider the multidimensional case:

$\begin{cases} \dfrac{\partial u}{\partial t} + A\,\dfrac{\partial u}{\partial x} = 0 \\ u(x, t = 0) = f(x) \end{cases}$ with $u = \begin{pmatrix} u_1 \\ \cdots \\ u_k \\ \cdots \\ u_n \end{pmatrix},\quad f = \begin{pmatrix} f_1 \\ \cdots \\ f_k \\ \cdots \\ f_n \end{pmatrix},\quad A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & \cdots \\ \cdots & \cdots & \ddots & \cdots \\ a_{n1} & \cdots & \cdots & a_{nn} \end{pmatrix}$    (178)

However, as will become evident through inspection of selected examples, this equation goes far beyond simple advection (or indeed a hyperbolic problem). In fact, a very broad class of linear PDEs in $(t, x)$ can be expressed in the form (178).

Example 1
Consider now the second-order wave equation:

$\dfrac{\partial^2 u}{\partial t^2} - c^2\,\dfrac{\partial^2 u}{\partial x^2} = 0$    (179)

and a new vector variable $v(x,t) = \begin{pmatrix} v_1(x,t) \\ v_2(x,t) \end{pmatrix}$, where:

$v_1(x,t) = \dfrac{\partial u}{\partial x},\qquad v_2(x,t) = \dfrac{\partial u}{\partial t} \;\Longrightarrow\; \begin{cases} \dfrac{\partial v_1}{\partial t} - \dfrac{\partial v_2}{\partial x} = 0 \\ \dfrac{\partial v_2}{\partial t} - c^2\,\dfrac{\partial v_1}{\partial x} = 0 \end{cases}$    (180)

The resulting system has the form:

$\dfrac{\partial v}{\partial t} + A\,\dfrac{\partial v}{\partial x} = 0,\qquad A = \begin{pmatrix} 0 & -1 \\ -c^2 & 0 \end{pmatrix}$    (181)

The characteristic polynomial of the matrix $A$ is $p(\lambda) = \lambda^2 - c^2$, thus the two eigenvalues are real and equal $\lambda = \pm c$.

Example 2
Consider now the second-order Laplace equation (written for the variables $t$ and $x$):

$\dfrac{\partial^2 u}{\partial t^2} + \dfrac{\partial^2 u}{\partial x^2} = 0$    (182)

and a new vector variable $v(x,t) = \begin{pmatrix} v_1(x,t) \\ v_2(x,t) \end{pmatrix}$, where:

$v_1(x,t) = \dfrac{\partial u}{\partial x},\qquad v_2(x,t) = \dfrac{\partial u}{\partial t} \;\Longrightarrow\; \begin{cases} \dfrac{\partial v_1}{\partial t} - \dfrac{\partial v_2}{\partial x} = 0 \\ \dfrac{\partial v_2}{\partial t} + \dfrac{\partial v_1}{\partial x} = 0 \end{cases}$    (183)

The resulting system has the form:

$\dfrac{\partial v}{\partial t} + A\,\dfrac{\partial v}{\partial x} = 0,\qquad A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$    (184)

The characteristic polynomial of the matrix $A$ is $p(\lambda) = \lambda^2 + 1$, thus the two eigenvalues are imaginary and equal $\lambda = \pm i$.
Thus not all systems of the form (178) have the same properties, and therefore not all of them are of
interest here. We are interested almost exclusively in the so-called hyperbolic systems:

Definition:
We will call the system (178) hyperbolic if and only if the matrix A ∈ ℝⁿˣⁿ is diagonalisable and all its
eigenvalues are real. In such a case the right eigenvectors r_1, r_2, …, r_n form a basis in ℝⁿ, and:

    A = R Λ R⁻¹,   where   R = [r_1, r_2, …, r_n],   A r_j = λ_j r_j,
    Λ = R⁻¹ A R   and   A R = R Λ                                             (185)
Now we are ready to present the exact solution of the hyperbolic IVP problem (178), (185). For this
purpose we left-multiply (178) by the constant matrix R⁻¹, to obtain the equation:

    R⁻¹ ∂u/∂t + (R⁻¹ A R) R⁻¹ ∂u/∂x = 0
    R⁻¹ u(x, t=0) = R⁻¹ f(x)                                                  (186)

in which a new variable w = R⁻¹ u is introduced, to deliver a system of uncoupled equations:

    ∂w/∂t + Λ ∂w/∂x = 0,    u = R w
    w(x, t=0) = R⁻¹ f(x) ≝ g(x)                                               (187)

As a result we obtain a system of decoupled scalar advection equations:

    ∂w_1/∂t + λ_1 ∂w_1/∂x = 0,    w_1(x, t=0) = g_1(x)
    …                                                                         (188)
    ∂w_n/∂t + λ_n ∂w_n/∂x = 0,    w_n(x, t=0) = g_n(x)

which we can solve in the closed form:

    w_1(x,t) = g_1(x - λ_1 t)
    w_2(x,t) = g_2(x - λ_2 t)
    …                                                                         (189)
    w_n(x,t) = g_n(x - λ_n t)

to finally get the solution in the original variables:

    u(x,t) = R (g_1(x - λ_1 t), g_2(x - λ_2 t), …, g_n(x - λ_n t))ᵀ           (190)
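The recipe (185)-(190) amounts to a few lines of linear algebra in practice. A minimal Python sketch (the matrix A and the initial profile f below are illustrative assumptions):

```python
import numpy as np

def solve_linear_hyperbolic(A, f, x, t):
    """Exact solution u(x,t) = R g(x - lambda t) of u_t + A u_x = 0, u(x,0) = f(x).

    A : diagonalisable n x n matrix with real eigenvalues
    f : callable returning the initial vector profile, f(x) -> shape (n,)
    """
    lam, R = np.linalg.eig(A)            # A = R diag(lam) R^{-1}
    Rinv = np.linalg.inv(R)
    g = lambda s: Rinv @ f(s)            # decoupled initial data g = R^{-1} f
    # each component w_j is simply advected: w_j(x,t) = g_j(x - lam_j t)
    w = np.array([g(x - lam[j] * t)[j] for j in range(len(lam))])
    return R @ w

# illustrative example: the 2x2 system used later in Example 4
A = np.array([[1.0, -3.0], [-2.0, 2.0]])
f = lambda x: np.array([0.0, np.sin(x)])
u0 = solve_linear_hyperbolic(A, f, x=0.5, t=0.0)
print(u0)
```

At t = 0 the construction reduces to u = R R⁻¹ f = f, so the printed value reproduces the initial condition.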


Example 3
Consider now again the example of the IVP for the second-order wave equation (the initial value is
prescribed for the function itself as well as for the time derivative, as the equation is of second order
with respect to time t):

    ∂²u/∂t² - c² ∂²u/∂x² = 0
    u(x, t=0) = a(x)                                                          (191)
    ∂u/∂t (x, t=0) = 0

where a(x) denotes a prescribed known function. The new vector variable
v(x,t) = (v_1(x,t), v_2(x,t))ᵀ is defined as:

    v_1 ≝ ∂u/∂x,   v_2 ≝ ∂u/∂t   ⟹   ∂v_1/∂t - ∂v_2/∂x = 0
                                      ∂v_2/∂t - c² ∂v_1/∂x = 0                (192)

    v(x, t=0) = (a′(x), 0)ᵀ                                                   (193)

The resulting system has the form:

    ∂v/∂t + A ∂v/∂x = 0,    A = [[0, -1], [-c², 0]]                           (194)

The characteristic polynomial of the matrix A is p(λ) = λ² - c², thus the two eigenvalues are real
and equal λ_{1,2} = ±c. The eigenvectors r_1, r_2 and the R and R⁻¹ matrices are

    r_1 = (1, -c)ᵀ (λ_1 = c),   r_2 = (1, c)ᵀ (λ_2 = -c),
    R = [[1, 1], [-c, c]],   R⁻¹ = (1/2c) [[c, -1], [c, 1]]                   (195)

The transformed variables are:

    w(x,t) = R⁻¹ v(x,t),   v(x,t) = R w(x,t)
    g(x) ≝ w(x, t=0) = R⁻¹ v(x, t=0) = (1/2c) [[c, -1], [c, 1]] (a′(x), 0)ᵀ = (1/2) (a′(x), a′(x))ᵀ   (196)

The solution of (194) can therefore be expressed as:

    w(x,t) = (1/2) (a′(x - ct), a′(x + ct))ᵀ                                  (197)

Now the original solution is:

    v(x,t) = R w(x,t) = (1/2) [[1, 1], [-c, c]] (a′(x - ct), a′(x + ct))ᵀ
           = (1/2) (a′(x - ct) + a′(x + ct), c[a′(x + ct) - a′(x - ct)])ᵀ     (198)

The solution of the wave equation (191) is therefore:

    u(x,t) = ∫ ∂u/∂x dx = ∫ v_1(x,t) dx = (1/2) ∫ [a′(x - ct) + a′(x + ct)] dx
           = (1/2) [a(x - ct) + a(x + ct)]                                    (199)
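The d'Alembert-type formula (199) can be checked by inserting it back into the wave equation with finite differences. A small Python sketch (the profile a(x), the speed c, the sample point and the step size are illustrative assumptions):

```python
import numpy as np

c = 2.0
a = lambda x: np.exp(-x**2)              # illustrative smooth initial profile

def u(x, t):
    """d'Alembert solution (199): u(x,t) = (a(x - ct) + a(x + ct)) / 2."""
    return 0.5 * (a(x - c * t) + a(x + c * t))

# second derivatives by central differences at a sample point
d = 1e-4
x0, t0 = 0.4, 0.3
u_tt = (u(x0, t0 + d) - 2 * u(x0, t0) + u(x0, t0 - d)) / d**2
u_xx = (u(x0 + d, t0) - 2 * u(x0, t0) + u(x0 - d, t0)) / d**2
print(abs(u_tt - c**2 * u_xx))           # residual of u_tt - c^2 u_xx, should be tiny
print(abs(u(x0, 0.0) - a(x0)))           # initial condition u(x,0) = a(x)
```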


The exact solution presented in this Section is of some theoretical interest, helping to understand
the structure of the solution of multidimensional linear hyperbolic problems. However, it is not
very useful for solving the nonlinear Euler equations numerically. Before we tackle nonlinear
problems, we have to recall the discretisation schemes for linear scalar advection-type problems
as well as for linear multidimensional hyperbolic problems.

Example 4
Suppose now that we have the following initial value problem:

    ∂u/∂t + A ∂u/∂x = 0,    A = [[1, -3], [-2, 2]]
    u(x, t=0) = (0, sin x)ᵀ                                                   (200)

Analysing the matrix A for eigenvalues and eigenvectors we obtain

    λ_1 = -1,   λ_2 = 4,   r_1 = (3, 2)ᵀ,   r_2 = (2, -2)ᵀ
    R = [[3, 2], [2, -2]],   R⁻¹ = (1/10) [[2, 2], [2, -3]]                   (201)

Therefore the initial condition for w = R⁻¹ u is

    w(x, t=0) = R⁻¹ u(x, t=0) = (1/10) (2 sin x, -3 sin x)ᵀ                   (202)

and the solution w can be expressed as:

    w(x,t) = (1/10) (2 sin(x + t), -3 sin(x - 4t))ᵀ                           (203)

Returning to the original unknown function u(x,t) we finally get the exact analytical solution

    u(x,t) = R · w(x,t) = (1/10) (6 sin(x + t) - 6 sin(x - 4t), 4 sin(x + t) + 6 sin(x - 4t))ᵀ   (204)

It can be easily verified that both the equation and the initial condition are fulfilled.
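The verification takes only a few lines of NumPy: evaluate (204), compare with the initial condition, and test the PDE residual by central differences (the sample point and step size are illustrative assumptions):

```python
import numpy as np

A = np.array([[1.0, -3.0], [-2.0, 2.0]])

def u(x, t):
    """Exact solution (204)."""
    return np.array([6 * np.sin(x + t) - 6 * np.sin(x - 4 * t),
                     4 * np.sin(x + t) + 6 * np.sin(x - 4 * t)]) / 10.0

# initial condition u(x,0) = (0, sin x)
x0 = 0.7
print(u(x0, 0.0))

# residual of u_t + A u_x at a sample point, via central differences
d = 1e-6
t0 = 0.2
u_t = (u(x0, t0 + d) - u(x0, t0 - d)) / (2 * d)
u_x = (u(x0 + d, t0) - u(x0 - d, t0)) / (2 * d)
print(np.abs(u_t + A @ u_x).max())       # should be close to zero
```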


16. Discretisation of the scalar advection equation


We will recall here basic facts concerning the discretisation of the scalar advection equation, known
from the earlier basic Computational Fluid Dynamics course:

    ∂u/∂t + c ∂u/∂x = 0     (i.e.  u_t + c u_x = 0)                           (205)

It will be discretised on the space-time grid (see the Figure below):

    u_j^n ≈ u(x_j, t^n),   x_j = jh,   t^n = nΔt,   j = 0, ±1, ±2, …,   n = 1, 2, 3, …   (206)
The analysis of the discretisation formulas is based on the Lax equivalence theorem, which states that
the convergence of a finite difference discretisation to the exact solution is subject to two conditions:

• Consistency (the finite difference formula should properly approximate the differential
formula)
• Stability (the numerical solution should not blow up in time)

Finite difference formulas


The basic finite difference formulas will now be recalled (see the online book Computational Fluid
Dynamics by the present author), as they are used in further considerations. These formulas were
derived by the technique based on the Taylor expansion.

Derivative (A)   Finite difference formula (B)        Error term (B - A)                               Name

u_x|_j           (u_{j+1} - u_j) / h                  (h/2) u_xx|_j + (h²/6) u_xxx|_j + …              One-sided formula

u_x|_j           (u_{j+1} - u_{j-1}) / (2h)           (h²/6) u_xxx|_j + (h⁴/120) u_xxxxx|_j + …        Central difference

u_xx|_j          (u_{j+1} - 2u_j + u_{j-1}) / h²      (h²/12) u_xxxx|_j + (h⁴/360) u_xxxxxx|_j + …     ---

Table 1, Finite difference formulas

These formulas are presented for the space derivatives u_x and u_xx, but the same formulas hold also
for the time derivatives.

Explicit Euler formula

The simplest possible discretisation of the advection equation combines a forward-time finite difference
with the central spatial finite difference formula:

    (u_j^{n+1} - u_j^n)/Δt + c (u_{j+1}^n - u_{j-1}^n)/(2h) = 0               (207)

The discretisation error is proportional to O(Δt) + O(h²).

This allows to evaluate the new value of the solution u_j^{n+1} at the next time level. This formula,
although straightforward and simple, is numerically unusable, being unconditionally unstable (the
numerical solution blows up in time, despite the fact that the exact solution is fully bounded).
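The instability can be made quantitative with a von Neumann analysis (a standard technique, not derived in the text): substituting u_j^n = Gⁿ e^{ijθ} into (207) gives the amplification factor G(θ) = 1 - iν sin θ with ν = cΔt/h, so |G| = √(1 + ν² sin²θ) > 1 for every ν and almost every mode. A quick Python check (the values of ν are illustrative):

```python
import numpy as np

def G_ftcs(nu, theta):
    """Amplification factor of the forward-time central-space scheme (207)."""
    return 1.0 - 1j * nu * np.sin(theta)

for nu in (0.1, 0.5, 1.0):
    theta = np.linspace(1e-3, np.pi - 1e-3, 1000)
    # the minimum over the sampled modes still exceeds 1: every mode grows
    print(nu, np.abs(G_ftcs(nu, theta)).min())
```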


Explicit one-sided formula

Interestingly, the less accurate discretisation with a forward-time finite difference and the one-sided
spatial finite difference formula has much better properties:

    (u_j^{n+1} - u_j^n)/Δt + c (u_j^n - u_{j-1}^n)/h = 0                      (208)

The discretisation error is proportional to O(Δt) + O(h).

Despite being less accurate, this discretisation formula is conditionally stable provided:

    c > 0   and   Δt ≤ h/c                                                    (209)

The Δt ≤ h/c condition is a typical Courant-Friedrichs-Lewy (CFL) condition, expressing the physical
requirement that in one time step Δt the grid information cannot travel more than one computational
cell h.
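A minimal Python sketch of the one-sided (upwind) scheme (208) on a periodic grid, run at a CFL number below 1 (the grid size, profile and CFL value are illustrative assumptions):

```python
import numpy as np

def upwind_step(u, c, dt, h):
    """One step of the first-order upwind scheme (208) for c > 0, periodic BC."""
    return u - c * dt / h * (u - np.roll(u, 1))

# illustrative setup: advect a Gaussian over a periodic unit domain
N, c = 200, 1.0
h = 1.0 / N
x = np.arange(N) * h
u = np.exp(-200 * (x - 0.5) ** 2)

dt = 0.9 * h / c                 # satisfies the CFL condition dt <= h/c
for _ in range(500):
    u = upwind_step(u, c, dt, h)

print(np.abs(u).max())           # stays bounded (here <= 1: the scheme is monotone)
```

With dt chosen above h/c the same loop blows up, in line with condition (209).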
The symmetric formula:

    (u_j^{n+1} - u_j^n)/Δt + c (u_{j+1}^n - u_j^n)/h = 0                      (210)

The discretisation error is proportional to O(Δt) + O(h).

is in turn stable for:

    c < 0   and   Δt ≤ h/|c|                                                  (211)
These two formulas give rise to the (slightly artificial in this context) upwind formulation, characterised
by the same discretisation error, but valid for all values of c:

    (u_j^{n+1} - u_j^n)/Δt + c⁺ (u_j^n - u_{j-1}^n)/h + c⁻ (u_{j+1}^n - u_j^n)/h = 0
    c⁺ = max(c, 0),   c⁻ = min(c, 0)                                          (212)

This upwind formula has many far-reaching generalisations for nonlinear and multidimensional
problems. The formula is no longer linear, as both c⁺ and c⁻ are in principle nonlinear functions (this
will become fully clear for multidimensional as well as nonlinear problems).

The explicit Lax-Friedrichs formula

The explicit Euler formula can be improved, and made stable, by replacing u_j^n in the time
derivative by the spatial average of the solution at the previous time step:

    (u_j^{n+1} - (u_{j+1}^n + u_{j-1}^n)/2)/Δt + c (u_{j+1}^n - u_{j-1}^n)/(2h) = 0   (213)

The discretisation error is proportional to O(Δt) + O(h), with the same CFL condition for stability:

    Δt ≤ h/|c|                                                                (214)

It is also interesting to notice that the Lax-Friedrichs formula can be rewritten as the explicit Euler
formula with an additional term on the right-hand side:

    (u_j^{n+1} - u_j^n)/Δt + c (u_{j+1}^n - u_{j-1}^n)/(2h) = (h²/(2Δt)) · (u_{j-1}^n - 2u_j^n + u_{j+1}^n)/h²   (215)

which forms a valid discretisation of the advection equation, but at the same time looks like a
second-order spatial discretisation of the advection-diffusion equation:

    u_t + c u_x = μ u_xx,   where   μ = h²/(2Δt)                              (216)

This modification can thus be understood as supplementing the original advection equation by a
term of artificial viscosity (added artificially to stabilise the numerical scheme).
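The algebraic equivalence of (213) and (215) can be checked directly: both give the same update. A short Python sketch with random data (the parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
c, h, dt = 1.0, 0.1, 0.05

up, um = np.roll(u, -1), np.roll(u, 1)   # u_{j+1}, u_{j-1} (periodic)

# Lax-Friedrichs form (213): spatial average replaces u_j in the time derivative
u_lf = 0.5 * (up + um) - c * dt / (2 * h) * (up - um)

# equivalent form (215): explicit Euler plus the artificial-viscosity term
u_av = u - c * dt / (2 * h) * (up - um) + (h**2 / (2 * dt)) * dt * (um - 2 * u + up) / h**2

print(np.abs(u_lf - u_av).max())         # zero up to round-off
```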

Implicit formulas
All the formulas above can be made implicit in time, which generally makes them unconditionally stable,
but at a very high numerical cost, as implicit formulations require solving large linear systems to
obtain the numerical solution.
The examples for the Euler formula and for the upwind formula are presented below:

    (u_j^{n+1} - u_j^n)/Δt + c (u_{j+1}^{n+1} - u_{j-1}^{n+1})/(2h) = 0       (217)

The discretisation error is proportional to O(Δt) + O(h²).

    (u_j^{n+1} - u_j^n)/Δt + c⁺ (u_j^{n+1} - u_{j-1}^{n+1})/h + c⁻ (u_{j+1}^{n+1} - u_j^{n+1})/h = 0
    c⁺ = max(c, 0),   c⁻ = min(c, 0)                                          (218)

Lax-Wendroff formula

The earlier formulations can be improved in time-discretisation accuracy to deliver O(Δt²) + O(h²),
by extending the concept of artificial viscosity presented earlier. The Lax-Wendroff formulation can
be expressed in the following form:

    (u_j^{n+1} - u_j^n)/Δt + c (u_{j+1}^n - u_{j-1}^n)/(2h) = (c²Δt/2) · (u_{j-1}^n - 2u_j^n + u_{j+1}^n)/h²   (219)

which is stable for Δt ≤ h/|c|.

Beam-Warming formula

The analogous one-sided second-order formulation, with error O(Δt²) + O(h²), can be expressed by:

    (u_j^{n+1} - u_j^n)/Δt + c (3u_j^n - 4u_{j-1}^n + u_{j-2}^n)/(2h) = (c²Δt/2) · (u_j^n - 2u_{j-1}^n + u_{j-2}^n)/h²   (220)

which is stable for c > 0 and Δt ≤ 2h/|c|.

The analogous symmetric formula can be shown to be stable for c < 0.


Despite their improved accuracy, the higher-order formulas of the Lax-Wendroff and Beam-Warming
type are of only limited further interest in the context of the simulation of compressible flows. In such
flows discontinuities appear as a rule, and in such cases other properties (monotonicity) are much more
important than the formal accuracy of the scheme.
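The point about monotonicity can be seen in a small experiment: advecting step data with the monotone upwind scheme (212) and with Lax-Wendroff (219). A Python sketch (grid size and CFL number are illustrative assumptions):

```python
import numpy as np

N, c = 400, 1.0
h = 1.0 / N
dt = 0.5 * h / c
nu = c * dt / h
u0 = np.where(np.arange(N) * h < 0.3, 1.0, 0.0)   # step (discontinuous) data

def step_upwind(u):
    return u - nu * (u - np.roll(u, 1))

def step_lax_wendroff(u):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (um - 2 * u + up)

uw = u0.copy()
lw = u0.copy()
for _ in range(200):
    uw = step_upwind(uw)
    lw = step_lax_wendroff(lw)

print(uw.min(), uw.max())   # upwind stays within [0, 1] (monotone)
print(lw.min(), lw.max())   # Lax-Wendroff over/undershoots near the discontinuities
```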


17. Discretisation of the multidimensional hyperbolic equation


We will attempt now to extend the scalar formulas of the previous Section to the multidimensional
hyperbolic case:

    ∂u/∂t + A ∂u/∂x = 0     (i.e.  u_t + A u_x = 0)
    u = (u_1, …, u_n)ᵀ,   A = [a_ij]  (an n×n matrix)                         (221)

i.e., one in which the matrix A is diagonalisable (the eigenvectors form a basis in ℝⁿ), and the
corresponding eigenvalues are real:

    A = R Λ R⁻¹,   where   R = [r_1, r_2, …, r_n],   A r_j = λ_j r_j,
    Λ = R⁻¹ A R   and   A R = R Λ                                             (222)
As demonstrated by (187), the system can be expressed in the decoupled form (w = R⁻¹ u), in which
a system of decoupled scalar advection equations is obtained:

    ∂w_1/∂t + λ_1 ∂w_1/∂x = 0
    …                                                                         (223)
    ∂w_n/∂t + λ_n ∂w_n/∂x = 0

This scalar system can be discretised, e.g., by the Lax-Friedrichs scheme:

    (w_{1,j}^{n+1} - (w_{1,j+1}^n + w_{1,j-1}^n)/2)/Δt + λ_1 (w_{1,j+1}^n - w_{1,j-1}^n)/(2h) = 0
    …                                                                         (224)
    (w_{n,j}^{n+1} - (w_{n,j+1}^n + w_{n,j-1}^n)/2)/Δt + λ_n (w_{n,j+1}^n - w_{n,j-1}^n)/(2h) = 0

and expressed in the following vector form:

    (w_j^{n+1} - (w_{j+1}^n + w_{j-1}^n)/2)/Δt + Λ (w_{j+1}^n - w_{j-1}^n)/(2h) = 0   (225)

Multiplying by R from the left we obtain the scheme formulated in the original variables:

    (u_j^{n+1} - (u_{j+1}^n + u_{j-1}^n)/2)/Δt + A (u_{j+1}^n - u_{j-1}^n)/(2h) = 0   (226)

It is clear from the above that the discretisation formulas do not change for vector systems of
first-order equations (as both the equations and the formulas are linear). However, for the vector
upwind case the formulas become more interesting:


    (w_{1,j}^{n+1} - w_{1,j}^n)/Δt + λ_1⁺ (w_{1,j}^n - w_{1,j-1}^n)/h + λ_1⁻ (w_{1,j+1}^n - w_{1,j}^n)/h = 0
    …                                                                         (227)
    (w_{n,j}^{n+1} - w_{n,j}^n)/Δt + λ_n⁺ (w_{n,j}^n - w_{n,j-1}^n)/h + λ_n⁻ (w_{n,j+1}^n - w_{n,j}^n)/h = 0

Again multiplying by R from the left we obtain the scheme formulated in the original variables:

    (u_j^{n+1} - u_j^n)/Δt + A⁺ (u_j^n - u_{j-1}^n)/h + A⁻ (u_{j+1}^n - u_j^n)/h = 0   (228)

where:

    A⁺ = R Λ⁺ R⁻¹,   A⁻ = R Λ⁻ R⁻¹,   A = A⁺ + A⁻                              (229)

The matrices A⁺ and A⁻ filter the positive and negative eigenvalues of the matrix A, allowing for a
stable discretisation. The CFL condition in this case has the following form:

    Δt ≤ h / max_j |λ_j|                                                      (230)

This upwind method is no longer linear, as it contains a switching function based on the sign of the
eigenvalues. Building on this nonlinear switching it is possible to improve the accuracy of the formula,
circumventing the Godunov barrier.
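The splitting (229) is straightforward to implement with an eigendecomposition. A Python sketch (the 2×2 matrix is the illustrative example used elsewhere in these notes, with eigenvalues -1 and 4):

```python
import numpy as np

def split_matrix(A):
    """Flux splitting (229): A± = R Λ± R⁻¹, with Λ± keeping only the
    positive/negative eigenvalues of the diagonalisable matrix A."""
    lam, R = np.linalg.eig(A)
    Rinv = np.linalg.inv(R)
    Ap = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv
    Am = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv
    return Ap, Am

A = np.array([[1.0, -3.0], [-2.0, 2.0]])
Ap, Am = split_matrix(A)
print(np.round(Ap + Am, 12))     # recovers A
print(np.linalg.eigvals(Ap))     # eigenvalues {4, 0}
print(np.linalg.eigvals(Am))     # eigenvalues {-1, 0}
```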


18. Nonlinear hyperbolic equations


As presented in Chapter 4, the time-dependent Euler equations have the form

    ∂u/∂t + ∂F(u)/∂x = 0,    u = (ρ, ρv, ρE)ᵀ ∈ ℝ³                            (231)

in which ρ, v, E stand for density, velocity and total energy per unit mass, respectively. The scalar
model equation (nonlinear advection equation) that will be considered and analysed is now:

    ∂u/∂t + ∂f(u)/∂x = 0   or   u_t + (f(u))_x = 0                            (232)

where u(x,t) is a scalar solution, while f(u) is a known nonlinear function.

The simplest equation of this kind is the Burgers equation, for which f(u) = u²/2:

    ∂u/∂t + ∂(u²/2)/∂x = 0                                                    (233)

with the initial condition:

    u(x, t=0) = u₀(x)                                                         (234)

Equation (232) is presented in the so-called "conservative form", as it describes the conservation of
the quantity represented by u; the function f(u) represents the flux of this quantity. This equation can be
transformed into the more familiar quasilinear form, by executing the differentiation with respect
to x:

    ∂u/∂t + (df/du) ∂u/∂x = 0
    or                                                                        (235)
    ∂u/∂t + a(u) ∂u/∂x = 0,    a(u) ≝ df/du

This equation can be solved by the method of characteristics, which will be presented here in general
and for three examples of increasing complexity.
and for three examples of increasing complexity.

The method of characteristics

The method of characteristics will be presented here for a slightly more general equation than (235),
but with the usual initial condition:

    ∂u/∂t + a(u, x) ∂u/∂x = 0
    u(x, t=0) = u₀(x)                                                         (236)

Let x = x*(t) denote an arbitrary curve in the (x,t) plane. On this line the solution u(x,t) is
equal to

    u*(t) ≝ u(x*(t), t)                                                       (237)

We are looking for the family of lines on which the solution is constant, i.e.,

    du*/dt = 0   ⟺   ∂u/∂t + (dx*/dt) · ∂u/∂x = 0                             (238)

This equation is identical with the nonlinear advection equation (236) provided the following
ordinary differential equation is fulfilled:

    dx*/dt = a(u*, x*)
    x*(t=0) = x₀                                                              (239)

We do not know the solution u(x,t) a priori; however, we know that on the characteristic line x*(t)
the solution u* = const ≡ u₀(x₀) = u(x₀, t=0). The latter value is known and therefore

    dx*/dt = a(u₀(x₀), x*)
    x*(t=0) = x₀                                                              (240)

If we know the analytic solution to this equation, we obtain a family of curves parametrised by
the value of x₀. In the following we will use the method of characteristics to solve equation (236)
graphically.

Linear equation with constant coefficient - a(u, x) = c

We already know the solution of this linear advection equation:

    u(x,t) = u₀(x - ct)                                                       (241)

This means that on the straight lines x = x₀ + ct (in the (x,t) plane) the solution is always constant
(these lines are the characteristics). The characteristics can alternatively be found as the solution to
the ODE (240):

    dx*/dt = c,   x*(t=0) = x₀   ⟹   x* = x₀ + ct                             (242)

Let us assume now that u₀(x) in the initial condition has the form

    u₀(x) = 0 for |x| > 1;   x + 1 for -1 < x < 0;   1 - x for 0 < x < 1      (243)

We seek now the form of the solution at time t = t₁. The geometric construction is based on the
analysis of the time evolution of the selected characteristic points of the function u₀(x) (on the
x-axis) A, B, C, which move to the new locations A′, B′, C′ (see next Figure).

[Figure: straight characteristics x = x₀ + ct; the points A, B, C of the triangular profile move to A′, B′, C′]

Linear equation with variable coefficient - a(u, x) = x

We will consider now the case a(u, x) = x with the previous initial condition u₀(x). We will seek the
solution at time t = t₁. In this case the IVP has the following form:

    ∂u/∂t + x ∂u/∂x = 0
    u(x, t=0) = u₀(x)                                                         (1)

and therefore the equation for the characteristics and the family of curves are

    dx*/dt = x*,   x*(t=0) = x₀   ⟹   x* = x₀ eᵗ                              (2)

[Figure: curvilinear characteristics x = x₀ eᵗ; A moves to A′, B = B′ remains at the origin, C moves to C′]

The curvilinear characteristics are presented in the figure above as broken blue lines. The initial
condition is denoted by the red solid line, while the solution at time t = 1 by the dashed green line.

Nonlinear equation - a(u, x) = u

This case corresponds to the Burgers equation:

    ∂u/∂t + u ∂u/∂x = 0
    u(x, t=0) = u₀(x)                                                         (244)

This equation contains a significant nonlinearity, which can also be found in the Euler and Navier-
Stokes equations. The solution is sought for times t₁ and t₂.

The equation for the family of characteristics is:

    dx*/dt = u₀(x₀),   x*(t=0) = x₀   ⟹   x* = x₀ + u₀(x₀) t                  (245)

Again the characteristics are straight lines, with an inclination depending on the initial value of the
solution.


The analysis of the results shows (in this particular case) that above t = t* ≡ 1 the characteristics
start to overlap, which would mean that for a fixed argument x the solution has two different values
(as the characteristics carry different values of the solution). Thus we have to conclude that above t*
the method of characteristics can no longer be used to predict the solution. Nevertheless, we can
expect that for t = t* a discontinuity appears in the solution.
This is a typical feature of nonlinear hyperbolic equations: even if the initial condition is
continuous and regular (smooth), the solution remains smooth only for a finite time. Therefore each
numerical scheme used to solve such equations must be able to cope with discontinuities.
It should also be investigated how these discontinuities evolve in time (how fast they propagate, and
whether all discontinuities are stable/permanent).
The discontinuities of the Burgers equation correspond to similar features in Fluid Mechanics, i.e.,
shockwaves and contact discontinuities.
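The crossing of characteristics can be observed directly: for the triangular initial condition (243), the characteristics x*(t) = x₀ + u₀(x₀)t emanating from 0 ≤ x₀ ≤ 1 all meet at t* = 1, x = 1. A small Python sketch:

```python
import numpy as np

def u0(x):
    """Triangular initial profile (243)."""
    return np.where(np.abs(x) > 1, 0.0, np.where(x < 0, x + 1, 1 - x))

def characteristic(x0, t):
    """Straight characteristic (245) of the Burgers equation."""
    return x0 + u0(x0) * t

x0 = np.linspace(0.0, 1.0, 11)         # feet of characteristics on the down-slope
print(characteristic(x0, 0.5))         # still distinct at t = 0.5
print(characteristic(x0, 1.0))         # all collapse to x = 1 at t* = 1
```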

Weak solutions of the nonlinear hyperbolic equations

If, as explained earlier, the solution to the equation

    ∂u/∂t + ∂f(u)/∂x = 0   or   u_t + (f(u))_x = 0                            (246)

has discontinuities, we must understand what this means for the original problem (around a
discontinuity both derivatives u_t and u_x do not exist).
For this purpose we assume that the so-called test function Φ(x,t) ∈ C₀¹(ℝ × ℝ⁺), i.e., that this
function is continuously differentiable and vanishes at infinity (both in space and time).
We multiply now (246) by Φ(x,t) and integrate it over the half-space (-∞, ∞) × (0, ∞):

    0 = ∫_{-∞}^{∞} ∫_0^{∞} [u_t + (f(u))_x] · Φ dt dx                         (247)

After integration by parts we obtain:

    ∫_0^{∞} u_t · Φ dt = - ∫_0^{∞} u · Φ_t dt - u(x,0) · Φ(x,0)
    ∫_{-∞}^{∞} (f(u))_x · Φ dx = - ∫_{-∞}^{∞} f(u) · Φ_x dx                   (248)

and therefore the whole equation is:

    0 = - ∫_{-∞}^{∞} ∫_0^{∞} [u Φ_t + f(u) Φ_x] dt dx - ∫_{-∞}^{∞} u(x,0) · Φ(x,0) dx   (249)

This equation is equivalent to (246) for differentiable u(x,t), but it also admits discontinuous solutions,
as it contains no derivatives of u(x,t). This equation is called the weak version of (246).
A function u(x,t) is called a weak solution of (246) if it fulfils the equation (249) for every function
Φ(x,t) ∈ C₀¹(ℝ × ℝ⁺). It has to be stressed that weak solutions may not be unique, and additional
conditions are needed to select the single correct solution. In fluid mechanics these conditions are
based on the entropy condition (2nd law of thermodynamics).
Now we will try to investigate the evolution of discontinuities and their stability.

Propagation of discontinuities (scalar case)

To investigate how the discontinuities propagate, we shall consider the following Riemann initial
value problem:

    ∂u/∂t + ∂f(u)/∂x = 0
    u(x, t=0) = u_L for x < 0;   u_R for x ≥ 0                                (250)

Here the discontinuity is present already in the initial condition, and therefore the solution is
understood in the sense of the weak formulation (249).
We will seek the solution u(x,t) in the form:

    u(x,t) = u_L for x < st;   u_R for x ≥ st                                 (251)

which means that the discontinuity moves without changing its intensity to the right with the speed s
(if s > 0). This speed s is unknown and has to be determined.
One should note that u(x,t) = const is an evident solution of (246), and therefore the assumption
that the discontinuity moves without changing shape and with a constant speed seems natural. We
will see further on that this is not always the case.


[Figure: the solution (251); the discontinuity is located at x = s·t inside the interval (-L, L)]

Let now (251) be a solution of (250) and let L be a large number. To determine the speed s we
calculate now the definite integral of u_t:

    ∫_{-L}^{L} u_t(x,t) dx = - ∫_{-L}^{L} (f(u))_x dx = f(u(-L,t)) - f(u(L,t)) = f(u_L) - f(u_R)   (252)

On the other hand we can calculate this integral directly (see the Figure above):

    ∫_{-L}^{L} u(x,t) dx = (L + st) u_L + (L - st) u_R
    d/dt ∫_{-L}^{L} u(x,t) dx = s (u_L - u_R)                                 (253)

Therefore

    s (u_L - u_R) = f(u_L) - f(u_R)

    s = (f(u_L) - f(u_R)) / (u_L - u_R)                                       (254)

The last relation allows to calculate the speed of propagation of the discontinuity and is known as the
Rankine-Hugoniot formula. In Fluid Mechanics the analogous formula allows to determine the
shockwave speed.
It must be stressed again that the assumption of the particular form of the solution (251) may not
be correct, and in such cases also (254) is no longer valid.
Three special cases will now be considered, for which the Rankine-Hugoniot formula will be evaluated:

a. Linear advection equation - f(u) = cu:

    s ≡ c                                                                     (255)

(For linear equations the discontinuities travel along the characteristics.)

b. For the Burgers equation - f(u) = u²/2:

    s = (u_L²/2 - u_R²/2) / (u_L - u_R) ≡ (u_L + u_R)/2

c. For a "more" nonlinear equation - f(u) = u³/3:

    s = (u_L³/3 - u_R³/3) / (u_L - u_R) ≡ (u_L² + u_L u_R + u_R²)/3
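The three special cases can be tabulated with a few lines of Python (the left/right states are illustrative assumptions):

```python
# Rankine-Hugoniot speed (254) for the three fluxes considered above
def shock_speed(f, uL, uR):
    """Speed of a discontinuity between states uL and uR for flux f."""
    return (f(uL) - f(uR)) / (uL - uR)

uL, uR = 2.0, 0.5                                    # illustrative states

print(shock_speed(lambda u: 3.0 * u, uL, uR))        # linear f = c*u    -> s = c = 3
print(shock_speed(lambda u: u**2 / 2, uL, uR))       # Burgers           -> (uL+uR)/2 = 1.25
print(shock_speed(lambda u: u**3 / 3, uL, uR))       # f = u^3/3         -> (uL^2+uL*uR+uR^2)/3 = 1.75
```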

Which discontinuities are permanent?

We will consider now the Burgers equation (244) with an initial condition (IC) consisting of two
discontinuities (red solid line in the Figure below):

    u₀(x) = 0 for x ≤ -1;   1 for -1 < x < 1;   0 for x ≥ 1                   (256)

In order to analyse whether a discontinuity is permanent, we shall consider a regularised ũ₀(x), for
which the discontinuities are replaced by very steep linear functions (in the ε-neighbourhood) - see
the next Figure. The new initial condition (green broken line) is continuous and therefore the method
of characteristics can be used. It can be easily noticed that the "right" discontinuity quickly reappears
after the time t* = 2ε (see the magenta broken line), while the left one is further smoothed out - the
inclination of the linear function drops down.
Further evolution of the former "left" discontinuity can be predicted with the method of
characteristics. The "right" discontinuity is permanent and its movement can be described by the
Rankine-Hugoniot relation (254).

[Figure 1: the initial condition (256) (red), its regularisation (green), and the solution after the right
discontinuity reforms (magenta); the plateau u(x,t) = 1 between the two fronts]

Therefore one may conclude that not all discontinuities are permanent, and as a consequence the
weak formulation (249) may contain solutions that need to be eliminated (by some additional
argument). In Fluid Mechanics (for the Euler equations) an identical phenomenon appears, but there a
physical argument is used instead of the presented geometric/kinematic reasoning (only
compression shockwaves exist; the hypothetical rarefaction shockwaves violate the second law of
thermodynamics).
For the Navier-Stokes equations these additional conditions are not necessary, as physical dissipation
prevents the creation of unphysical solutions (the Navier-Stokes equations are not reversible in
time, in contrast to the inviscid Euler equations). Nevertheless, for small viscosity (large Reynolds
number) the discretised Navier-Stokes equations (being underresolved on an insufficiently dense
mesh) as a rule require additional artificial dissipation to prevent the creation of false shocks.
It should be noted in addition that for linear hyperbolic equations (with constant coefficient c)
all discontinuities are admissible and permanent (as in such a case the characteristics have the same
inclination and do not cross).

Conservative vs. quasilinear formulation

We have shown in (250)-(254) how to calculate the shock speed basing on the conservative version of
the nonlinear advection equation:

    ∂u/∂t + ∂f(u)/∂x = 0                                                      (257)

This version is called conservative because it is the original form, obtained when the equation is
derived from (some) conservation law. In contrast, if the derivative with respect to x is further
evaluated, the nonconservative or quasilinear form is obtained:

    ∂u/∂t + a(u) ∂u/∂x = 0,   where   a(u) ≡ df/du                            (258)

From the numerical point of view it is important to know (i) whether this quasilinear version can be
discretised and solved to give the correct solution, and in particular (ii) whether the obtained
shock/discontinuity speed will be the same as for the conservative version (257).
To investigate this latter question (in some indirect manner) we shall consider the Riemann problem
for the Burgers equation:

    ∂u/∂t + ∂(u²/2)/∂x = 0                                                    (259)

The speed of propagation of discontinuities is:

    s = (u_L + u_R)/2                                                         (260)

The Burgers equation in the quasilinear form is:

    ∂u/∂t + u ∂u/∂x = 0                                                       (261)


The question remains which value of the shock speed can be associated with this equation (e.g., if
solved numerically) - will it be (260), or perhaps some different value? To answer this question
indirectly we multiply the quasilinear equation by u:

    u u_t + u² u_x = 0   ⇔   ∂/∂t (u²/2) + ∂/∂x (u³/3) = 0                    (262)

and obtain a different conservative equation:

    ∂w/∂t + ∂/∂x ((2/3) w^{3/2}) = 0,   where   w = u²                        (263)

We can calculate the speed of propagation of discontinuities for this equation:

    s* = (2/3) (w_L^{3/2} - w_R^{3/2}) / (w_L - w_R) ≡ (2/3) (u_L³ - u_R³) / (u_L² - u_R²)   (264)

This formula is obviously different from (260); e.g., for u_L = 2 and u_R = 0 one obtains
s = 1 and s* = 4/3. This proves that the same quasilinear equation is equivalent to two different
conservative equations with two different shock speeds. Therefore it must be concluded that the
quasilinear equation, when discretised, will produce a solution with a false shock speed (and indeed
this is the case when we try to solve the nonlinear discretisation).
For more advanced problems, when the stationary solution is sought, the nonconservative equation
will always produce a shock with the wrong intensity and location (this is a common observation for
transonic solutions of the Euler equations). This effect is not large, yet it adversely affects the
accuracy of computations, especially where drag estimation is concerned (shockwaves generate drag
depending directly on their intensity). Therefore all present numerical codes for compressible flows
are based on the conservative version of the Fluid Dynamics equations (be it Euler or Navier-Stokes).
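The wrong shock speed of the quasilinear form can be demonstrated numerically: a first-order upwind discretisation of the conservative form (259) moves the shock at s = (u_L + u_R)/2, while the same scheme applied to the quasilinear form (261) propagates it at a wrong speed (here it does not move at all). A Python sketch (grid, CFL number and states are illustrative assumptions; valid for u ≥ 0):

```python
import numpy as np

# Riemann data u_L = 2, u_R = 0 for the Burgers equation; the exact shock speed is s = 1.
N = 400
h = 10.0 / N
x = -2.0 + np.arange(N) * h
u_cons = np.where(x < 0.0, 2.0, 0.0)
u_quas = u_cons.copy()
dt = 0.4 * h / 2.0                       # CFL based on max |u| = 2
T = 2.0

def front(u):
    """Position of the discontinuity: first cell where u drops below u_L / 2."""
    return x[np.argmax(u < 1.0)]

for _ in range(int(T / dt)):
    # conservative upwind: difference of fluxes f = u^2/2 (valid for u >= 0)
    f = u_cons**2 / 2.0
    u_cons = u_cons - dt / h * (f - np.roll(f, 1))
    # quasilinear (nonconservative) upwind: a(u) = u times the difference of u
    u_quas = u_quas - dt / h * u_quas * (u_quas - np.roll(u_quas, 1))
    u_cons[0] = u_quas[0] = 2.0          # inflow boundary condition

print(front(u_cons))   # near s*T = 2, the correct shock position
print(front(u_quas))   # stuck near 0: the nonconservative form moves the shock wrongly
```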

Propagation of discontinuities (vector case)


We consider now the Riemann problem for the multidimensional nonlinear hyperbolic equation:

    ∂u/∂t + ∂F(u)/∂x = 0,    u = (u_1, …, u_n)ᵀ,   F(u) = (F_1(u_1,…,u_n), …, F_n(u_1,…,u_n))ᵀ ∈ ℝⁿ
    u(x, t=0) = u_L for x < 0;   u_R for x ≥ 0                                (265)

and its quasilinear form:

    ∂u/∂t + A(u) ∂u/∂x = 0,    A(u) ≝ ∂F/∂u ≡ [∂F_i/∂u_j]  (the Jacobian matrix)
    u(x, t=0) = u_L for x < 0;   u_R for x ≥ 0                                (266)

The matrix A depends on the solution u; however, in order to understand the nonlinear case we shall
consider now the Riemann problem for A = const, i.e., for the linear vector hyperbolic equation
(221):


    ∂u/∂t + A ∂u/∂x = 0,   u = (u_1, …, u_n)ᵀ,   u_L = (u_{L1}, …, u_{Ln})ᵀ,   u_R = (u_{R1}, …, u_{Rn})ᵀ
    u(x, t=0) = u_L for x < 0;   u_R for x ≥ 0                                (267)

where the matrix A is diagonalisable:

    A = R Λ R⁻¹,   R = [r_1, r_2, …, r_n],   A r_j = λ_j r_j,   Λ = diag(λ_1, …, λ_n)   (268)

If the analysis carried out earlier for the scalar equation is repeated, we may obtain the analogue of
the Rankine-Hugoniot formula:

    s (u_L - u_R) = A (u_L - u_R)                                             (269)

which indicates that discontinuities (shocks) may travel unchanged with a speed s only if the jump
vector u_L - u_R is an eigenvector r_j of the matrix A (and in such a case s ≡ λ_j).
To further analyse this case we multiply now (as previously) the equation (267) by R⁻¹ and carry out
the following substitutions:

    w = R⁻¹ u,   w_L = R⁻¹ u_L ≡ (w_{L1}, …, w_{Ln})ᵀ,   w_R = R⁻¹ u_R ≡ (w_{R1}, …, w_{Rn})ᵀ   (270)

to obtain:

    ∂w/∂t + Λ ∂w/∂x = 0                    ∂w_j/∂t + λ_j ∂w_j/∂x = 0
    w(x, t=0) = w_L for x < 0;      ⇔      w_j(x, t=0) = w_{Lj} for x < 0;    (271)
               w_R for x ≥ 0                            w_{Rj} for x ≥ 0

According to (255) the solution is:

    w_j(x,t) = H(x; w_{Lj}, w_{Rj}, λ_j t) ≝ w_{Lj} for x < λ_j t;   w_{Rj} for x ≥ λ_j t   (272)

In the above, H(x; a, b, x*) stands for the jump function (a Heaviside-like function), for which the
discontinuity appears at the point x*.
Finally the solution to the original problem can be expressed as:

    u(x,t) = R · w(x,t) = R · (H(x; w_{L1}, w_{R1}, λ_1 t), …, H(x; w_{Ln}, w_{Rn}, λ_n t))ᵀ   (273)

Extension of this procedure to the fully nonlinear hyperbolic equation is difficult and requires further
analysis.

Example 5
We shall present now the explicit solution to the following Riemann initial value problem:

    ∂u/∂t + A ∂u/∂x = 0,    A = [[1, -3], [-2, 2]]
    u_1(x, t=0) = -1 for x < 0;   2 for x ≥ 0
    u_2(x, t=0) = 1 for x < 2;    3 for x ≥ 2

i.e.   u(x, t=0) = (H(x; -1, 2, 0), H(x; 1, 3, 2))ᵀ                           (274)

The following properties of A are now recalled:

    λ_1 = -1,   λ_2 = 4,   r_1 = (3, 2)ᵀ,   r_2 = (2, -2)ᵀ
    R = [[3, 2], [2, -2]],   R⁻¹ = (1/10) [[2, 2], [2, -3]]                   (275)

Therefore the initial condition for w = R⁻¹ u is

    w(x, t=0) = R⁻¹ u(x, t=0) = (1/10) (2H(x; -1, 2, 0) + 2H(x; 1, 3, 2),
                                        2H(x; -1, 2, 0) - 3H(x; 1, 3, 2))ᵀ    (276)

and the solution w(x,t) can be expressed as:

    w(x,t) = (1/10) (2H(x; -1, 2, -t) + 2H(x; 1, 3, 2 - t),
                     2H(x; -1, 2, 4t) - 3H(x; 1, 3, 2 + 4t))ᵀ                 (277)

Returning to the original unknown function u(x,t) we finally get the exact analytical solution

    u(x,t) = R · w(x,t)
           = (1/10) (6H(x; -1, 2, -t) + 6H(x; 1, 3, 2 - t) + 4H(x; -1, 2, 4t) - 6H(x; 1, 3, 2 + 4t),
                     4H(x; -1, 2, -t) + 4H(x; 1, 3, 2 - t) - 4H(x; -1, 2, 4t) + 6H(x; 1, 3, 2 + 4t))ᵀ   (278)

It can be verified that both the equation and the initial condition are fulfilled. Both components of
u(x,t) are presented in the Figure below (t = 0: red solid line, t = 2: green dashed line).

[Figure: the component u_1(x,t) at t = 0 and t = 2 (left panel), and u_2(x,t) at t = 0 and t = 2 (right panel)]


19. Godunov's order barrier theorem

Theorem
A monotone, linear discretisation scheme for u_t + c u_x = 0 can be at most first-order accurate.

This negative result adds an additional difficulty to the development of useful discretisation formulas
for nonlinear hyperbolic systems (as obviously first-order schemes are insufficiently accurate for
hydrodynamic simulation purposes).
The additional requirement of monotonicity brought up by this theorem is motivated by two factors:
• The nonlinearity of the equations of interest (and the spontaneous generation of discontinuities)
• The oscillations appearing at discontinuities for higher-order formulas
The practical consequence of the Godunov theorem is that all discretisation formulas for hyperbolic
systems (e.g., the Euler/Navier-Stokes equations) have to remain nonlinear, basing, e.g., either on
nonlinear upwind schemes or on nonlinear artificial viscosity schemes.


20. Annex 1

How to calculate ‖A‖_∞ using the definition of the induced norm (83)?

First we have the unit sphere and the operator acting on the sphere:

    ‖x‖_∞ ≡ max_j |x_j| = 1

    ‖Ax‖_∞ ≡ max_{i=1,…,n} |Σ_{j=1}^n a_ij x_j| ≤ max_{i=1,…,n} Σ_{j=1}^n |a_ij| |x_j|   (1)

    ‖Ax‖_∞ ≤ max_{i=1,…,n} Σ_{j=1}^n |a_ij|

We have shown that the norm is never larger than this value. It is sufficient to show now that there
exists a unit vector for which this upper bound is actually achieved (this will be the value of the
norm).
To show this we observe that there must exist i₀ such that

    max_{i=1,…,n} Σ_{j=1}^n |a_ij| = Σ_{j=1}^n |a_{i₀j}|                      (2)

We take now x* = (sign(a_{i₀1}), sign(a_{i₀2}), …, sign(a_{i₀n}))ᵀ, for which we have
‖x*‖_∞ ≡ max_j |x*_j| = 1.
We are now able to calculate:

    ‖Ax*‖_∞ ≡ max_{i=1,…,n} |Σ_{j=1}^n a_ij x*_j| = Σ_{j=1}^n a_{i₀j} x*_j = Σ_{j=1}^n |a_{i₀j}|   (3)

where the inequality was substituted by equality, as all elements in the sum over the i₀-th row are
now non-negative (x* was selected in such a way as to achieve this effect). Therefore:

    ‖A‖_∞ ≝ sup_{‖x‖_∞=1} ‖Ax‖_∞ = Σ_{j=1}^n |a_{i₀j}| = max_{i=1,…,n} Σ_{j=1}^n |a_ij|   (4)

The above concludes the proof.
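The result ‖A‖_∞ = max_i Σ_j |a_ij|, together with the maximising vector x*, can be checked against NumPy (the random test matrix is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# maximum absolute row sum, as derived above
row_sum_norm = np.abs(A).sum(axis=1).max()

# the maximising vector x* = signs of the entries of the worst row
i0 = np.abs(A).sum(axis=1).argmax()
x_star = np.sign(A[i0])

print(np.abs(A @ x_star).max())                      # attains the bound
print(np.isclose(row_sum_norm, np.linalg.norm(A, np.inf)))
```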


How to calculate ‖n‖• using the definition of the induced norm (83) ?
21. Annex 2

First we have the unit sphere and the operator acting on the sphere:
e
‖ ‖N ≡ g› j › = 1
jkN

e e e e
‖n ‖N ≡ g 8g opj j 8 ≤ g g›opj ›
pkN jkN pkN jkN
(5)

e
‖n ‖N ≤ max g›opj ›
jkN,…,e
pkN

We have shown that the norm never exceeds this value. It is now sufficient to show that there
exists a unit vector for which this upper bound is actually attained (it will then be the value of the
norm).

To show this we observe that there must exist $j^*$ such that

$$\max_{j=1,\dots,n} \sum_{i=1}^{n} |a_{ij}| = \sum_{i=1}^{n} |a_{ij^*}| \qquad (6)$$

We now take $x^* = e_{j^*} = [0, \dots, 0, 1, 0, \dots, 0]^T$, with the only nonzero entry in the $j^*$-th
position. Therefore $\|x^*\|_1 \equiv \sum_{j=1}^{n} |x^*_j| = 1$.
We are now able to calculate:

$$\|Ax^*\|_1 \equiv \sum_{i=1}^{n} \Big|\sum_{j=1}^{n} a_{ij} x^*_j\Big| = \sum_{i=1}^{n} |a_{ij^*}| = \max_{j=1,\dots,n} \sum_{i=1}^{n} |a_{ij}| \qquad (7)$$

where the inequality has been replaced by equality, since only the $j^*$-th column of the matrix
contributes ($x^*$ was selected precisely to achieve this effect). Therefore:

$$\|A\|_1 = \sup_{\|x\|_1 = 1} \|Ax\|_1 = \max_{j=1,\dots,n} \sum_{i=1}^{n} |a_{ij}| \qquad (8)$$

which concludes the proof.
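As for the infinity-norm, this can be checked numerically. The sketch below (same arbitrarily chosen test matrix, an assumption for illustration only) verifies that the maximum absolute column sum coincides with NumPy's induced 1-norm, and that the basis vector $e_{j^*}$ attains the bound:

```python
import numpy as np

# Arbitrary test matrix (an assumption for illustration only)
A = np.array([[ 1.0, -2.0,  3.0],
              [ 4.0,  0.0, -1.0],
              [-2.0,  2.0,  2.0]])

col_sums = np.abs(A).sum(axis=0)   # sum_i |a_ij| for each column j
norm_1 = col_sums.max()            # max_j sum_i |a_ij|

# The maximising unit vector x* is the j*-th basis vector:
j_star = col_sums.argmax()
x_star = np.zeros(A.shape[1])
x_star[j_star] = 1.0
assert np.abs(A @ x_star).sum() == norm_1           # bound is attained
assert np.isclose(norm_1, np.linalg.norm(A, 1))
```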


22. Annex 3

How to calculate $\|A\|_2$ using the definition of the induced norm (83)?

$$\|A\|_2 \overset{\text{def}}{=} \sup_{\|x\|_2 = 1} \|Ax\|_2 = \sup_{\|x\|_2 = 1} \sqrt{x^H A^H A x} \qquad (9)$$

The matrix $B \equiv A^H A = B^H$ is, as shown earlier, Hermitian and non-negative; therefore its eigenvalues $\lambda_k$
are real and non-negative, while its eigenvectors $w_k$ are orthogonal (in our case even orthonormal)
and form a basis in $\mathbb{R}^n$ or $\mathbb{C}^n$:

$$B w_k = \lambda_k w_k, \qquad k = 1, 2, \dots, n \qquad (10)$$

We will assume that:

$$0 \le \lambda_1 \le \lambda_2 \le \dots \le \lambda_k \le \dots \le \lambda_n = \lambda_{\max}(B), \qquad \langle w_k, w_m \rangle = w_k^H w_m = \delta_{km} \qquad (11)$$

An arbitrary $x \in \mathbb{R}^n$ or $\mathbb{C}^n$ with $\|x\|_2 = 1$ can be expressed in this basis:

$$x = \sum_{k=1}^{n} \alpha_k w_k, \qquad \|x\|_2^2 = \sum_{k=1}^{n} |\alpha_k|^2 = 1$$

$$Bx = \sum_{k=1}^{n} \alpha_k B w_k = \sum_{k=1}^{n} \alpha_k \lambda_k w_k \qquad (12)$$

$$x^H B x = \sum_{k=1}^{n} \alpha_k \lambda_k \, \langle x, w_k \rangle = \sum_{k=1}^{n} |\alpha_k|^2 \lambda_k \le \lambda_{\max}(B)$$

as a consequence:

$$\|Ax\|_2 = \sqrt{x^H B x} \le \sqrt{\lambda_{\max}(B)} \qquad (13)$$
We have shown that the norm never exceeds this value. It is now sufficient to show that there
exists a unit vector for which this upper bound is actually attained (it will then be the value of the
norm).

Suppose we take $x = w_n$, the eigenvector corresponding to the maximum eigenvalue $\lambda_{\max}(B)$:

$$w_n^H B w_n = \lambda_{\max}(B) \qquad (14)$$

Instead of an inequality we now obtain the required equality, and as a consequence:

$$\|A\|_2 = \sqrt{\lambda_{\max}(B)} = \max_{\lambda \in \operatorname{spect}(A^H A)} \sqrt{\lambda}$$

For Hermitian (symmetric real) matrices we have $A = A^H$ and $\lambda_k(B) = \lambda_k^2(A)$, and therefore:

$$\|A\|_2 = |\lambda|_{\max}(A) = \max_{\lambda \in \operatorname{spect}(A)} |\lambda| \qquad (15)$$

Remark:
The second matrix norm $\|A\|_2$ is difficult to calculate in the general case, as it requires finding the
maximum eigenvalue of the matrix $A^H A$. Nevertheless, in special cases (like the discrete Poisson
operator on a square/rectangular domain) this value is readily available.
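A numerical check of formula (15) and of the general eigenvalue characterisation can be sketched as follows (same arbitrarily chosen test matrix as before, an assumption for illustration only):

```python
import numpy as np

# Arbitrary test matrix (an assumption for illustration only)
A = np.array([[ 1.0, -2.0,  3.0],
              [ 4.0,  0.0, -1.0],
              [-2.0,  2.0,  2.0]])

B = A.conj().T @ A                 # B = A^H A: Hermitian, non-negative
lam = np.linalg.eigvalsh(B)        # real eigenvalues in ascending order
norm_2 = np.sqrt(lam[-1])          # sqrt(lambda_max(B))

assert np.isclose(norm_2, np.linalg.norm(A, 2))

# For a Hermitian (symmetric real) matrix the 2-norm is max |eigenvalue|:
S = (A + A.T) / 2
assert np.isclose(np.linalg.norm(S, 2),
                  np.abs(np.linalg.eigvalsh(S)).max())
```

Note that `np.linalg.norm(A, 2)` itself computes the largest singular value of A, i.e. exactly $\sqrt{\lambda_{\max}(A^H A)}$, which is why the eigenvalue route is mainly of theoretical interest.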

