
Gauss quadrature

Gérard MEURANT

October, 2008
1 Quadrature rules

2 The Gauss rule

3 The Gauss–Radau rule

4 The Gauss–Lobatto rule

5 Computation of the Gauss rules

6 The anti–Gauss rule

7 Nonsymmetric Gauss quadrature rules

8 The block Gauss quadrature rules


Quadrature rules

Given a measure α on the interval [a, b] and a function f, a quadrature rule is a relation

$$\int_a^b f(\lambda)\, d\alpha = \sum_{j=1}^N w_j f(t_j) + R[f]$$

R[f ] is the remainder which is usually not known exactly


The real numbers tj are the nodes and wj the weights
The rule is said to be of exact degree d if R[p] = 0 for all polynomials p of degree at most d and R[q] ≠ 0 for some polynomial q of degree d + 1
▶ Quadrature rules of degree N − 1 can be obtained by interpolation
▶ Such quadrature rules are called interpolatory
▶ Newton–Cotes formulas are defined by taking the nodes to be equally spaced
▶ A popular choice for the nodes is the zeros of the Chebyshev polynomial of degree N. This is called the Fejér quadrature rule
▶ Another interesting choice is the set of extrema of the Chebyshev polynomial of degree N − 1. This gives the Clenshaw–Curtis quadrature rule (both node families are written out in the sketch after this list)
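The two Chebyshev-based node families have simple closed forms. The following small sketch (an addition to these notes, assuming numpy; the weights, which would be obtained by interpolation, are omitted) lists them for N = 8:

```python
import numpy as np

N = 8

# Fejér rule: the nodes are the zeros of the Chebyshev polynomial T_N
fejer_nodes = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))

# Clenshaw–Curtis rule: the nodes are the extrema of T_{N-1}, endpoints included
clenshaw_curtis_nodes = np.cos(np.arange(N) * np.pi / (N - 1))

print(np.sort(fejer_nodes))
print(np.sort(clenshaw_curtis_nodes))
```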
Theorem
Let k be an integer, 0 ≤ k ≤ N. The quadrature rule has degree d = N − 1 + k if and only if it is interpolatory and

$$\int_a^b \prod_{j=1}^N (\lambda - t_j)\, p(\lambda)\, d\alpha = 0, \qquad \forall p \ \text{polynomial of degree} \le k - 1.$$

see Gautschi
If the measure is positive, k = N is maximal for interpolatory quadrature since, if k = N + 1, the condition in the last theorem would give that the polynomial

$$\prod_{j=1}^N (\lambda - t_j)$$

is orthogonal to itself, which is impossible


Gauss quadrature rules

The optimal quadrature rule of degree 2N − 1 is called a Gauss quadrature
It was introduced by C.F. Gauss at the beginning of the nineteenth century
The general formula for a Riemann–Stieltjes integral is

$$I[f] = \int_a^b f(\lambda)\, d\alpha(\lambda) = \sum_{j=1}^N w_j f(t_j) + \sum_{k=1}^M v_k f(z_k) + R[f], \qquad (1)$$

where the weights wj, j = 1, . . . , N, and vk, k = 1, . . . , M, and the nodes tj, j = 1, . . . , N, are unknowns, while the nodes zk, k = 1, . . . , M, are prescribed

see Davis and Rabinowitz; Gautschi; Golub and Welsch


▶ If M = 0, this is the Gauss rule with no prescribed nodes
▶ If M = 1 and z1 = a or z1 = b we have the Gauss–Radau rule
▶ If M = 2 and z1 = a, z2 = b, this is the Gauss–Lobatto rule
The term R[f] is the remainder, which generally cannot be explicitly computed
If the measure α is a positive non–decreasing function

$$R[f] = \frac{f^{(2N+M)}(\eta)}{(2N+M)!} \int_a^b \prod_{k=1}^M (\lambda - z_k) \left[ \prod_{j=1}^N (\lambda - t_j) \right]^2 d\alpha(\lambda), \qquad a < \eta < b \qquad (2)$$

Note that for the Gauss rule, the remainder R[f] has the sign of f^(2N)(η)
see Stoer and Bulirsch
The Gauss rule

How do we compute the nodes tj and the weights wj ?

▶ One way to compute the nodes and weights is to use f(λ) = λ^i, i = 1, . . . , 2N − 1, and to solve the nonlinear equations expressing the fact that the quadrature rule is exact
▶ Use of the orthogonal polynomials associated with the measure α

$$\int_a^b p_i(\lambda)\, p_j(\lambda)\, d\alpha(\lambda) = \delta_{i,j}$$

P(λ) = [p0(λ) p1(λ) · · · pN−1(λ)]^T,  e^N = (0 0 · · · 0 1)^T

λP(λ) = JN P(λ) + γN pN(λ) e^N


 
$$J_N = \begin{pmatrix} \omega_1 & \gamma_1 & & & \\ \gamma_1 & \omega_2 & \gamma_2 & & \\ & \ddots & \ddots & \ddots & \\ & & \gamma_{N-2} & \omega_{N-1} & \gamma_{N-1} \\ & & & \gamma_{N-1} & \omega_N \end{pmatrix}$$

JN is a Jacobi matrix; its eigenvalues are real and simple
Theorem
The eigenvalues of JN (the so–called Ritz values θj^(N), which are also the zeros of pN) are the nodes tj of the Gauss quadrature rule. The weights wj are the squares of the first elements of the normalized eigenvectors of JN
Proof.
The monic polynomial ∏_{j=1}^N (λ − tj) is orthogonal to all polynomials of degree less than or equal to N − 1. Therefore, (up to a multiplicative constant) it is the orthogonal polynomial associated to α and the nodes of the quadrature rule are the zeros of the orthogonal polynomial, that is the eigenvalues of JN
The vector P(tj) is an unnormalized eigenvector of JN corresponding to the eigenvalue tj
If q is an eigenvector with norm 1, we have P(tj) = ωq with a scalar ω. From the Christoffel–Darboux relation

wj P(tj)^T P(tj) = 1,  j = 1, . . . , N

Then

wj P(tj)^T P(tj) = wj ω^2 ‖q‖^2 = wj ω^2 = 1

Hence, wj = 1/ω^2. To find ω we can pick any component of the eigenvector q, for instance the first one, which is different from zero: ω = p0(tj)/q1 = 1/q1. Then, the weight is given by

wj = q1^2

If the integral of the measure is not 1

$$w_j = q_1^2\, \mu_0 = q_1^2 \int_a^b d\alpha(\lambda)$$

Knowledge of the Jacobi matrix and of the first moment allows one to compute the nodes and weights of the Gauss quadrature rule
Golub and Welsch showed how the squares of the first components of the eigenvectors can be computed with a QR–like method, without having to compute the other components
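As an illustration added to these notes (assuming numpy, and using the standard Legendre recurrence coefficients ωk = 0, γk = k/√(4k² − 1) with μ0 = 2 on [−1, 1] purely as an example), the nodes and weights can be read off a full eigendecomposition of JN; this is the idea behind the Golub–Welsch method, although their algorithm obtains the first eigenvector components more economically. The helper name below is hypothetical:

```python
import numpy as np

def gauss_from_jacobi(omega, gamma, mu0):
    """Nodes and weights of the N-point Gauss rule from the Jacobi matrix.

    omega : diagonal entries (omega_1 .. omega_N)
    gamma : off-diagonal entries (gamma_1 .. gamma_{N-1})
    mu0   : integral of the measure, int_a^b d alpha
    """
    J = np.diag(omega) + np.diag(gamma, -1) + np.diag(gamma, 1)
    nodes, Z = np.linalg.eigh(J)        # eigenvalues t_j, orthonormal eigenvectors
    weights = mu0 * Z[0, :] ** 2        # squares of the first eigenvector components
    return nodes, weights

# Illustration: Legendre weight d(alpha) = d(lambda) on [-1, 1], mu0 = 2.
# Orthonormal Legendre recurrence: omega_k = 0, gamma_k = k / sqrt(4 k^2 - 1).
N = 5
k = np.arange(1, N)
nodes, weights = gauss_from_jacobi(np.zeros(N), k / np.sqrt(4 * k**2 - 1), 2.0)
# The 5-point rule integrates lambda^8 exactly: int_{-1}^{1} lambda^8 d lambda = 2/9
print(weights @ nodes**8, 2 / 9)
```

Run as written, the 5-point rule reproduces ∫_{−1}^{1} λ^8 dλ = 2/9 up to rounding, as expected from its exactness degree 2N − 1 = 9.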
$$I[f] = \int_a^b f(\lambda)\, d\alpha(\lambda) = \sum_{j=1}^N w_j^G f(t_j^G) + R_G[f]$$

with

$$R_G[f] = \frac{f^{(2N)}(\eta)}{(2N)!} \int_a^b \left[ \prod_{j=1}^N (\lambda - t_j^G) \right]^2 d\alpha(\lambda)$$

The monic polynomial ∏_{j=1}^N (tj^G − λ), which is the determinant χN of JN − λI, can be written as γ1 · · · γN−1 pN(λ)
Theorem
Suppose f is such that f^(2n)(ξ) > 0, ∀n, ∀ξ, a < ξ < b, and let

$$L_G[f] = \sum_{j=1}^N w_j^G f(t_j^G)$$

The Gauss rule is exact for polynomials of degree less than or equal to 2N − 1 and

LG[f] ≤ I[f]

Moreover ∀N, ∃η ∈ [a, b] such that

$$I[f] - L_G[f] = \frac{f^{(2N)}(\eta)}{(2N)!}\, (\gamma_1 \cdots \gamma_{N-1})^2$$
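As a small added numerical check (again assuming the Legendre measure on [−1, 1], with μ0 = 2), f(λ) = exp(λ) has all even derivatives positive, so every Gauss rule should stay below the integral:

```python
import numpy as np

# f(lambda) = exp(lambda) has f^(2n) > 0 on [-1, 1]; the theorem then predicts
# L_G[f] <= I[f] for every N (Legendre measure, mu0 = 2)
I_exact = np.exp(1) - np.exp(-1)
for N in range(2, 7):
    k = np.arange(1, N)
    gamma = k / np.sqrt(4 * k**2 - 1)          # Legendre recurrence coefficients
    J = np.diag(gamma, -1) + np.diag(gamma, 1)
    t, Z = np.linalg.eigh(J)
    L_G = (2.0 * Z[0, :] ** 2) @ np.exp(t)     # Gauss rule applied to exp
    print(N, I_exact - L_G)                    # positive, and decreasing quickly
```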
The Gauss–Radau rule

To obtain the Gauss–Radau rule, we have to extend the matrix JN in such a way that it has one prescribed eigenvalue z1 = a or b
Assume z1 = a. We wish to construct pN+1 such that pN+1(a) = 0

0 = γN+1 pN+1(a) = (a − ωN+1) pN(a) − γN pN−1(a)

This gives

$$\omega_{N+1} = a - \gamma_N\, \frac{p_{N-1}(a)}{p_N(a)}$$
Note that

(JN − aI) P(a) = −γN pN(a) e^N

Let δ(a) = [δ1(a), · · · , δN(a)]^T with

$$\delta_l(a) = -\gamma_N\, \frac{p_{l-1}(a)}{p_N(a)}, \qquad l = 1, \ldots, N$$

This gives ωN+1 = a + δN(a) and δ(a) satisfies

(JN − aI) δ(a) = γN^2 e^N

▶ we generate γN
▶ we solve the tridiagonal system for δ(a); this gives δN(a)
▶ we compute ωN+1 = a + δN(a)

$$\hat{J}_{N+1} = \begin{pmatrix} J_N & \gamma_N e^N \\ \gamma_N (e^N)^T & \omega_{N+1} \end{pmatrix}$$

gives the nodes and the weights of the Gauss–Radau quadrature rule
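A minimal sketch of these three steps (an addition; numpy and the Legendre coefficients are assumed for the illustration, and a dense solve stands in for a dedicated tridiagonal solver):

```python
import numpy as np

def gauss_radau_matrix(J_N, gamma_N, z1):
    """Extend the Jacobi matrix J_N so that z1 (= a or b) is an eigenvalue."""
    N = J_N.shape[0]
    e_N = np.zeros(N); e_N[-1] = 1.0
    # solve the (tridiagonal) system (J_N - z1 I) delta = gamma_N^2 e_N
    delta = np.linalg.solve(J_N - z1 * np.eye(N), gamma_N**2 * e_N)
    omega_next = z1 + delta[-1]                  # omega_{N+1} = z1 + delta_N(z1)
    J_hat = np.zeros((N + 1, N + 1))
    J_hat[:N, :N] = J_N
    J_hat[N, N - 1] = J_hat[N - 1, N] = gamma_N
    J_hat[N, N] = omega_next
    return J_hat

# Illustration with the Legendre weight on [-1, 1] (mu0 = 2) and z1 = -1
N = 4
k = np.arange(1, N)
J = np.diag(k / np.sqrt(4 * k**2 - 1), -1) + np.diag(k / np.sqrt(4 * k**2 - 1), 1)
gamma_N = N / np.sqrt(4 * N**2 - 1)
nodes, Z = np.linalg.eigh(gauss_radau_matrix(J, gamma_N, -1.0))
weights = 2.0 * Z[0, :] ** 2
print(nodes[0])                      # the prescribed node -1, up to rounding
print(weights @ nodes**8, 2 / 9)     # the rule is exact for degree 2N = 8
```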
Theorem
Suppose f is such that f^(2n+1)(ξ) < 0, ∀n, ∀ξ, a < ξ < b. Let

$$U_{GR}[f] = \sum_{j=1}^N w_j^a f(t_j^a) + v_1^a f(a)$$

wj^a, v1^a, tj^a being the weights and nodes computed with z1 = a, and let

$$L_{GR}[f] = \sum_{j=1}^N w_j^b f(t_j^b) + v_1^b f(b)$$

wj^b, v1^b, tj^b being the weights and nodes computed with z1 = b.
The Gauss–Radau rule is exact for polynomials of degree less than or equal to 2N and we have

LGR[f] ≤ I[f] ≤ UGR[f]
Theorem (end)
Moreover ∀N ∃ ηU, ηL ∈ [a, b] such that

$$I[f] - U_{GR}[f] = \frac{f^{(2N+1)}(\eta_U)}{(2N+1)!} \int_a^b (\lambda - a) \left[ \prod_{j=1}^N (\lambda - t_j^a) \right]^2 d\alpha(\lambda)$$

$$I[f] - L_{GR}[f] = \frac{f^{(2N+1)}(\eta_L)}{(2N+1)!} \int_a^b (\lambda - b) \left[ \prod_{j=1}^N (\lambda - t_j^b) \right]^2 d\alpha(\lambda)$$
The Gauss–Lobatto rule
We would like to have

pN+1(a) = pN+1(b) = 0

Using the recurrence relation

$$\begin{pmatrix} p_N(a) & p_{N-1}(a) \\ p_N(b) & p_{N-1}(b) \end{pmatrix} \begin{pmatrix} \omega_{N+1} \\ \gamma_N \end{pmatrix} = \begin{pmatrix} a\, p_N(a) \\ b\, p_N(b) \end{pmatrix}$$

Let

$$\delta_l = -\frac{p_{l-1}(a)}{\gamma_N\, p_N(a)}, \qquad \mu_l = -\frac{p_{l-1}(b)}{\gamma_N\, p_N(b)}, \qquad l = 1, \ldots, N$$

then

(JN − aI) δ = e^N,   (JN − bI) µ = e^N

$$\begin{pmatrix} 1 & -\delta_N \\ 1 & -\mu_N \end{pmatrix} \begin{pmatrix} \omega_{N+1} \\ \gamma_N^2 \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$$

▶ we solve the tridiagonal systems for δ and µ; this gives δN and µN
▶ we compute ωN+1 and γN

$$\hat{J}_{N+1} = \begin{pmatrix} J_N & \gamma_N e^N \\ \gamma_N (e^N)^T & \omega_{N+1} \end{pmatrix}$$
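A corresponding sketch for the Lobatto modification (an addition, again with numpy, the Legendre coefficients and dense solves assumed for the illustration):

```python
import numpy as np

def gauss_lobatto_matrix(J_N, a, b):
    """Extend J_N so that both a and b are eigenvalues (Gauss-Lobatto)."""
    N = J_N.shape[0]
    e_N = np.zeros(N); e_N[-1] = 1.0
    delta = np.linalg.solve(J_N - a * np.eye(N), e_N)
    mu = np.linalg.solve(J_N - b * np.eye(N), e_N)
    # 2 x 2 system for omega_{N+1} and gamma_N^2
    omega_next, gamma2 = np.linalg.solve(np.array([[1.0, -delta[-1]],
                                                   [1.0, -mu[-1]]]),
                                         np.array([a, b]))
    J_hat = np.zeros((N + 1, N + 1))
    J_hat[:N, :N] = J_N
    J_hat[N, N - 1] = J_hat[N - 1, N] = np.sqrt(gamma2)
    J_hat[N, N] = omega_next
    return J_hat

# Illustration: Legendre weight on [-1, 1]; a 5-point Lobatto rule (N = 4)
N = 4
k = np.arange(1, N)
J = np.diag(k / np.sqrt(4 * k**2 - 1), -1) + np.diag(k / np.sqrt(4 * k**2 - 1), 1)
nodes, Z = np.linalg.eigh(gauss_lobatto_matrix(J, -1.0, 1.0))
weights = 2.0 * Z[0, :] ** 2
print(nodes)                       # the first and last nodes are -1 and 1
print(weights @ nodes**6, 2 / 7)   # a 5-point Lobatto rule is exact up to degree 7
```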
Theorem
Suppose f is such that f^(2n)(ξ) > 0, ∀n, ∀ξ, a < ξ < b and let

$$U_{GL}[f] = \sum_{j=1}^N w_j^{GL} f(t_j^{GL}) + v_1^{GL} f(a) + v_2^{GL} f(b)$$

tj^GL, wj^GL, v1^GL and v2^GL being the nodes and weights computed with a and b as prescribed nodes. The Gauss–Lobatto rule is exact for polynomials of degree less than or equal to 2N + 1 and

I[f] ≤ UGL[f]

Moreover ∀N ∃ η ∈ [a, b] such that

$$I[f] - U_{GL}[f] = \frac{f^{(2N+2)}(\eta)}{(2N+2)!} \int_a^b (\lambda - a)(\lambda - b) \left[ \prod_{j=1}^N (\lambda - t_j^{GL}) \right]^2 d\alpha(\lambda)$$
Computation of the Gauss rules
The weights wi are given by the squares of the first components of the eigenvectors: wi = (z_1^i)^2 = ((e^1)^T z^i)^2
Theorem

$$\sum_{l=1}^N w_l f(t_l) = (e^1)^T f(J_N)\, e^1$$

Proof.

$$\sum_{l=1}^N w_l f(t_l) = \sum_{l=1}^N (e^1)^T z^l f(t_l) (z^l)^T e^1 = (e^1)^T \left( \sum_{l=1}^N z^l f(t_l) (z^l)^T \right) e^1 = (e^1)^T Z_N f(\Theta_N) Z_N^T e^1 = (e^1)^T f(J_N)\, e^1$$
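A quick numerical check of this identity, added here (assuming numpy and scipy, and the Legendre measure with μ0 = 2, so both sides are scaled by μ0):

```python
import numpy as np
from scipy.linalg import expm

# Jacobi matrix of the orthonormal Legendre polynomials; the measure has mass mu0 = 2
N = 6
k = np.arange(1, N)
J = np.diag(k / np.sqrt(4 * k**2 - 1), -1) + np.diag(k / np.sqrt(4 * k**2 - 1), 1)

# Left-hand side: the Gauss rule sum_l w_l f(t_l) for f = exp
t, Z = np.linalg.eigh(J)
rule = (2.0 * Z[0, :] ** 2) @ np.exp(t)

# Right-hand side: mu0 * (e^1)^T f(J_N) e^1, here with the matrix exponential
matrix_form = 2.0 * expm(J)[0, 0]

print(rule, matrix_form)           # the two values agree up to rounding errors
```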
The anti–Gauss rule
A usual way of obtaining an estimate of I[f] − L_G^N[f] is to use another quadrature rule Q[f] of degree greater than 2N − 1 and to estimate the error as Q[f] − L_G^N[f]
Laurie proposed to construct a quadrature rule with N + 1 nodes, called an anti–Gauss rule,

$$H^{N+1}[f] = \sum_{j=1}^{N+1} \varpi_j f(\vartheta_j),$$

such that

I[p] − H^{N+1}[p] = −(I[p] − L_G^N[p])

for all polynomials of degree 2N + 1. Then, the error of the Gauss rule can be estimated as

$$\frac{1}{2}\bigl(H^{N+1}[f] - L_G^N[f]\bigr)$$

Equivalently,

H^{N+1}[p] = 2 I[p] − L_G^N[p]

for all polynomials p of degree 2N + 1. Hence, H^{N+1} is a Gauss rule with N + 1 nodes for the functional I(·) = 2 I[·] − L_G^N[·]
We have

I[pq] = I(pq)

for p a polynomial of degree N − 1 and q a polynomial of degree N, and

I(p̃_N^2) = 2 I[p̃_N^2]

where p̃j are the orthogonal polynomials associated to the functional I(·)
Using the Stieltjes formulas for the coefficients we obtain the Jacobi matrix

$$\tilde{J}_{N+1} = \begin{pmatrix} \omega_1 & \gamma_1 & & & & \\ \gamma_1 & \omega_2 & \gamma_2 & & & \\ & \ddots & \ddots & \ddots & & \\ & & \gamma_{N-2} & \omega_{N-1} & \gamma_{N-1} & \\ & & & \gamma_{N-1} & \omega_N & \sqrt{2}\,\gamma_N \\ & & & & \sqrt{2}\,\gamma_N & \omega_{N+1} \end{pmatrix}$$
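A small sketch of this construction (an addition; the Legendre coefficients are assumed, and the helper names are illustrative): take the (N + 1) × (N + 1) Jacobi matrix of the measure, multiply its last off-diagonal entry γN by √2, and compare the resulting error estimate with the true Gauss error.

```python
import numpy as np

def legendre_jacobi(n):
    """n x n Jacobi matrix of the orthonormal Legendre polynomials."""
    k = np.arange(1, n)
    g = k / np.sqrt(4 * k**2 - 1)
    return np.diag(g, -1) + np.diag(g, 1)

def rule(J, f, mu0=2.0):
    """Quadrature rule attached to a (possibly modified) Jacobi matrix."""
    t, Z = np.linalg.eigh(J)
    return (mu0 * Z[0, :] ** 2) @ f(t)

N = 4
I_exact = np.exp(1) - np.exp(-1)           # int_{-1}^{1} exp(lambda) d lambda

J_gauss = legendre_jacobi(N)
J_anti = legendre_jacobi(N + 1)
J_anti[N, N - 1] = J_anti[N - 1, N] = np.sqrt(2) * J_anti[N, N - 1]  # gamma_N -> sqrt(2) gamma_N

L_G = rule(J_gauss, np.exp)                # Gauss rule with N nodes
H = rule(J_anti, np.exp)                   # anti-Gauss rule with N + 1 nodes
print(I_exact - L_G)                       # true error of the Gauss rule
print((H - L_G) / 2)                       # Laurie's estimate of that error
```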

The anti–Gauss nodes ϑj, j = 2, . . . , N, are inside the integration interval
However, the first and the last nodes may fall outside of the integration interval
In fact, in some cases the matrix J̃N+1 can be indefinite even if JN is positive definite
One can construct a quadrature rule S^{N+1}[f] such that

I[p] − S^{N+1}[p] = −γ (I[p] − L_G^N[p])

for all polynomials of degree 2N + 1. The parameter γ is positive and less than or equal to 1
 
$$\tilde{J}_{N+1} = \begin{pmatrix} \omega_1 & \gamma_1 & & & & \\ \gamma_1 & \omega_2 & \gamma_2 & & & \\ & \ddots & \ddots & \ddots & & \\ & & \gamma_{N-2} & \omega_{N-1} & \gamma_{N-1} & \\ & & & \gamma_{N-1} & \omega_N & \gamma_N\sqrt{1+\gamma} \\ & & & & \gamma_N\sqrt{1+\gamma} & \omega_{N+1} \end{pmatrix}$$

The error of the Gauss rule can be estimated as

$$\frac{1}{1+\gamma}\bigl(S^{N+1}[f] - L_G^N[f]\bigr)$$
Nonsymmetric Gauss quadrature rules

We consider the case where the measure α can be written as

$$\alpha(\lambda) = \sum_{k=1}^l \alpha_k \delta_k, \qquad \lambda_l \le \lambda < \lambda_{l+1}, \quad l = 1, \ldots, N-1$$

where αk ≠ δk and αk δk ≥ 0
We assume that there exist two sequences of mutually orthogonal (sometimes called bi–orthogonal) polynomials p and q such that

γj pj(λ) = (λ − ωj) pj−1(λ) − βj−1 pj−2(λ),  p−1(λ) ≡ 0, p0(λ) ≡ 1

βj qj(λ) = (λ − ωj) qj−1(λ) − γj−1 qj−2(λ),  q−1(λ) ≡ 0, q0(λ) ≡ 1

with ⟨pi, qj⟩ = 0, i ≠ j
Let

P(λ)^T = [p0(λ) p1(λ) · · · pN−1(λ)]
Q(λ)^T = [q0(λ) q1(λ) · · · qN−1(λ)]

and

$$J_N = \begin{pmatrix} \omega_1 & \gamma_1 & & & \\ \beta_1 & \omega_2 & \gamma_2 & & \\ & \ddots & \ddots & \ddots & \\ & & \beta_{N-2} & \omega_{N-1} & \gamma_{N-1} \\ & & & \beta_{N-1} & \omega_N \end{pmatrix}$$

In matrix form

λP(λ) = JN P(λ) + γN pN(λ) e^N

λQ(λ) = JN^T Q(λ) + βN qN(λ) e^N
Proposition

$$p_j(\lambda) = \frac{\beta_j \cdots \beta_1}{\gamma_j \cdots \gamma_1}\, q_j(\lambda)$$

Hence, qN is a multiple of pN and the polynomials have the same roots, which are also the common real eigenvalues of JN and JN^T
We define the quadrature rule as

$$\int_a^b f(\lambda)\, d\alpha(\lambda) = \sum_{j=1}^N f(\theta_j)\, s_j t_j + R[f]$$

where θj is an eigenvalue of JN, sj is the first component of the eigenvector uj of JN corresponding to θj, and tj is the first component of the eigenvector vj of JN^T corresponding to the same eigenvalue, normalized such that vj^T uj = 1
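A brief added sketch of this rule (assuming numpy; the function name is illustrative): the rows of U^{-1} are left eigenvectors already normalized so that vj^T uj = 1, so the products sj tj can be read off directly. The nonsymmetric test matrix is obtained from the Legendre Jacobi matrix by a diagonal similarity, which is just a convenient way to get a nonsymmetric tridiagonal matrix with βj γj > 0 and a known integral.

```python
import numpy as np

def nonsymmetric_gauss(J, f, mu0=1.0):
    """Evaluate  sum_j f(theta_j) s_j t_j  for a nonsymmetric tridiagonal J."""
    theta, U = np.linalg.eig(J)      # right eigenvectors u_j (columns of U)
    V = np.linalg.inv(U)             # rows of U^{-1} are left eigenvectors with v_j^T u_j = 1
    s = U[0, :]                      # first components of the u_j
    t = V[:, 0]                      # first components of the v_j
    return mu0 * np.real((s * t) @ f(theta))

# A nonsymmetric tridiagonal matrix with the same rule as the Legendre Jacobi
# matrix: diagonal similarity D J D^{-1} with first diagonal entry equal to 1
k = np.arange(1, 5)
g = k / np.sqrt(4 * k**2 - 1)
J_sym = np.diag(g, -1) + np.diag(g, 1)
D = np.diag([1.0, 2.0, 0.5, 3.0, 1.5])
J_nonsym = D @ J_sym @ np.linalg.inv(D)

print(nonsymmetric_gauss(J_nonsym, np.exp, mu0=2.0))   # ~ exp(1) - exp(-1)
print(np.exp(1) - np.exp(-1))
```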
Theorem
Assume that γj βj ≠ 0; then the nonsymmetric Gauss quadrature rule is exact for polynomials of degree less than or equal to 2N − 1
The remainder is characterized as

$$R[f] = \frac{f^{(2N)}(\eta)}{(2N)!} \int_a^b p_N(\lambda)^2\, d\alpha(\lambda)$$

The extension of the Gauss–Radau and Gauss–Lobatto rules to the nonsymmetric case is almost identical to the symmetric case
The block Gauss quadrature rules

The integral ∫_a^b f(λ) dα(λ) is now a 2 × 2 symmetric matrix. The most general quadrature formula is of the form

$$\int_a^b f(\lambda)\, d\alpha(\lambda) = \sum_{j=1}^N W_j f(T_j) W_j + R[f]$$

where Wj and Tj are symmetric 2 × 2 matrices. This can be reduced to

$$\sum_{j=1}^{2N} f(t_j)\, u^j (u^j)^T$$

where tj is a scalar and u^j is a vector with two components


There exist orthogonal matrix polynomials related to α such that

λ pj−1(λ) = pj(λ) Γj + pj−1(λ) Ωj + pj−2(λ) Γj−1^T

p0(λ) ≡ I2,  p−1(λ) ≡ 0


This can be written as

λ [p0(λ), . . . , pN−1(λ)] = [p0(λ), . . . , pN−1(λ)] JN + [0, . . . , 0, pN(λ) ΓN]

where

$$J_N = \begin{pmatrix} \Omega_1 & \Gamma_1^T & & & \\ \Gamma_1 & \Omega_2 & \Gamma_2^T & & \\ & \ddots & \ddots & \ddots & \\ & & \Gamma_{N-2} & \Omega_{N-1} & \Gamma_{N-1}^T \\ & & & \Gamma_{N-1} & \Omega_N \end{pmatrix}$$

is a symmetric block tridiagonal matrix of order 2N
The nodes tj are the zeros of the determinant of the matrix orthogonal polynomials, that is, the eigenvalues of JN, and ui is the vector consisting of the first two components of the corresponding eigenvector
However, the eigenvalues may have a multiplicity larger than 1
Let θi, i = 1, . . . , l, be the set of distinct eigenvalues and ni their multiplicities. The quadrature rule is then

$$\sum_{i=1}^l \left( \sum_{j=1}^{n_i} (w_i^j)(w_i^j)^T \right) f(\theta_i)$$

The block quadrature rule is exact for polynomials of degree less than or equal to 2N − 1, but the proof is rather involved
The block Gauss–Radau rule

We would like a to be a double eigenvalue of JN+1

JN+1 P(a) = a P(a) − [0, . . . , 0, pN+1(a) ΓN+1]^T

a pN(a) − pN(a) ΩN+1 − pN−1(a) ΓN^T = 0

If pN(a) is nonsingular

ΩN+1 = a I2 − pN(a)^{−1} pN−1(a) ΓN^T

But

$$(J_N - aI)\begin{pmatrix} -p_0(a)^T p_N(a)^{-T} \\ \vdots \\ -p_{N-1}(a)^T p_N(a)^{-T} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ \Gamma_N^T \end{pmatrix}$$
▶ We first solve

$$(J_N - aI)\begin{pmatrix} \delta_0(a) \\ \vdots \\ \delta_{N-1}(a) \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ \Gamma_N^T \end{pmatrix}$$

▶ We compute

ΩN+1 = a I2 + δN−1(a)^T ΓN^T
The block Gauss–Lobatto rule
The generalization of the Gauss–Lobatto construction to the block case is a little more difficult
We would like to have a and b as double eigenvalues of the matrix JN+1
It gives

$$\begin{pmatrix} I_2 & p_N(a)^{-1} p_{N-1}(a) \\ I_2 & p_N(b)^{-1} p_{N-1}(b) \end{pmatrix} \begin{pmatrix} \Omega_{N+1} \\ \Gamma_N^T \end{pmatrix} = \begin{pmatrix} a I_2 \\ b I_2 \end{pmatrix}$$

Let δ(λ) be the solution of

(JN − λI) δ(λ) = (0 . . . 0 I2)^T

Then, as before

δN−1(λ) = −pN−1(λ)^T pN(λ)^{−T} ΓN^{−T}

Solving the 4 × 4 linear system we obtain

$$\Gamma_N^T \Gamma_N = (b-a)\,\bigl(\delta_{N-1}(a) - \delta_{N-1}(b)\bigr)^{-1}$$

Thus, ΓN is given by a Cholesky factorization of the right hand side matrix, which is positive definite because δN−1(a) is a diagonal block of (JN − aI)^{−1}, which is positive definite, and −δN−1(b) is the negative of a diagonal block of (JN − bI)^{−1}, which is negative definite
From ΓN, we compute

ΩN+1 = a I2 + ΓN δN−1(a) ΓN^T
Computation of the block Gauss rules

Theorem

$$\sum_{i=1}^{2N} f(t_i)\, u_i u_i^T = e^T f(J_N)\, e$$

where e^T = (I2 0 . . . 0)
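This identity holds for any symmetric matrix partitioned into 2 × 2 blocks, so it can be checked numerically on a randomly generated symmetric block tridiagonal matrix (an added sketch, assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 4                                   # number of diagonal blocks; J has order 2N
J = np.zeros((2 * N, 2 * N))
for i in range(N):
    Omega = rng.standard_normal((2, 2))
    J[2*i:2*i+2, 2*i:2*i+2] = (Omega + Omega.T) / 2          # symmetric Omega_{i+1}
    if i < N - 1:
        Gamma = rng.standard_normal((2, 2))                   # Gamma_{i+1}
        J[2*i+2:2*i+4, 2*i:2*i+2] = Gamma
        J[2*i:2*i+2, 2*i+2:2*i+4] = Gamma.T

E = np.eye(2 * N)[:, :2]                # e = (I_2 0 ... 0)^T

# Left-hand side: sum_i f(t_i) u_i u_i^T with u_i the first two eigenvector components
t, Z = np.linalg.eigh(J)
lhs = (Z[:2, :] * np.exp(t)) @ Z[:2, :].T

# Right-hand side: e^T f(J_N) e, here with f = exp via the matrix exponential
rhs = E.T @ expm(J) @ E
print(np.allclose(lhs, rhs))            # True
```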
P.J. Davis and P. Rabinowitz, Methods of numerical integration, second edition, Academic Press, (1984)
W. Gautschi, Orthogonal polynomials: computation and approximation, Oxford University Press, (2004)
G.H. Golub and G. Meurant, Matrices, moments and quadrature, in Numerical Analysis 1993, D.F. Griffiths and G.A. Watson eds., Pitman Research Notes in Mathematics, v 303, (1994), pp 105–156
G.H. Golub and J.H. Welsch, Calculation of Gauss quadrature rules, Math. Comp., v 23, (1969), pp 221–230
D.P. Laurie, Anti–Gaussian quadrature formulas, Math. Comp., v 65 n 214, (1996), pp 739–747
J. Stoer and R. Bulirsch, Introduction to numerical analysis, second edition, Springer Verlag, (1983)
