NUMERICAL INTEGRATION (Presented by VANDANA VERMA)

This presentation covers numerical integration, focusing on techniques such as the Newton-Cotes formulas, Trapezoidal rule, and Simpson's Rule, along with their error analysis. It highlights the importance of numerical integration in cases where integrands are known only at specific points or when finding antiderivatives is difficult. The document also discusses the error terms associated with these methods and provides mathematical formulations for approximating integrals.


NUMERICAL INTEGRATION
Overview of this presentation
• Introduction
• Motivation
• Newton-Cotes Formula
• Trapezoidal Rule, Simpson's Rule and their Error Analysis
Introduction
Numerical integration is the approximate
computation of an integral using numerical
techniques. The numerical computation of an
integral is sometimes called quadrature.
Motivation to study Numerical Integration

• The integrand f(x) may be known only at certain points, such as those obtained by sampling. Some embedded systems and other computer applications may need numerical integration for this reason.

• A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative.

• It may be possible to find an antiderivative symbolically, but it may be easier to compute a numerical approximation than to compute the antiderivative.
• The most straightforward numerical integration
technique uses the Newton-Cotes formulas
(also called quadrature formulas), which
approximate a function tabulated at a sequence
of regularly spaced intervals by various degree
polynomials. If the endpoints are tabulated,
then the 2- and 3-point formulas are called the
Trapezoidal rule and Simpson's rule,
respectively. A generalization of the trapezoidal
rule is Romberg integration, which can yield
accurate results for many fewer function
evaluations.
Numerical Integration
The general problem of numerical integration
is to find an approximate value of the integral
$$I = \int_a^b w(x)\,f(x)\,dx,$$

where w(x) > 0 in [a,b] is the weight function.


We assume that w(x) and w(x)f(x) are integrable in the Riemann sense on [a,b].
The above integral is approximated by a finite linear combination of values of f(x) in the form

$$I = \int_a^b w(x)\,f(x)\,dx \approx \sum_{k=0}^{n} \lambda_k f_k,$$

where x_k, k = 0(1)n, are called the abscissas or nodes, distributed within the limits of integration [a,b], and λ_k, k = 0(1)n, are called the weights of the integration rule or the quadrature formula.
The error of approximation is given as

$$R_n = \int_a^b w(x)\,f(x)\,dx - \sum_{k=0}^{n} \lambda_k f_k.$$
Definition
An integration method is said to be of order p if it produces exact results (R_n = 0) for all polynomials of degree less than or equal to p.
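This definition can be checked numerically. Below is a minimal sketch (not part of the original presentation; the helper name rule_order is illustrative) that applies a rule with given nodes and weights to the monomials x^m and reports the largest degree that is integrated exactly:

def rule_order(nodes, weights, a, b, max_degree=10, tol=1e-12):
    """Largest m such that the rule integrates x^0, ..., x^m exactly on [a, b]."""
    p = -1
    for m in range(max_degree + 1):
        exact = (b**(m + 1) - a**(m + 1)) / (m + 1)           # integral of x^m over [a, b]
        approx = sum(w * x**m for w, x in zip(weights, nodes))
        if abs(exact - approx) > tol:
            break
        p = m
    return p

# Trapezoidal rule on [0, 1]: nodes 0 and 1, weights 1/2 each
print(rule_order([0.0, 1.0], [0.5, 0.5], 0.0, 1.0))           # prints 1

For the trapezoidal rule on [0, 1] this prints 1, in agreement with the order derived later in the presentation.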
Methods based on Interpolation
Given the n+1 abscissas x_k and the corresponding values f_k, k = 0(1)n, the Lagrange interpolating polynomial fitting the data is given by

$$f(x) = \sum_{k=0}^{n} l_k(x)\,f_k + \frac{\pi(x)}{(n+1)!}\,f^{(n+1)}(\xi), \qquad x_0 < \xi < x_n,$$

where π(x) = (x - x_0)(x - x_1)...(x - x_n) and l_k(x) is the Lagrange fundamental polynomial

$$l_k(x) = \frac{\pi(x)}{(x - x_k)\,\pi'(x_k)}.$$
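As a quick illustration, for n = 1 with nodes x_0 and x_1 these definitions reduce to the familiar linear basis:

$$l_0(x) = \frac{x - x_1}{x_0 - x_1}, \qquad l_1(x) = \frac{x - x_0}{x_1 - x_0},$$

so the interpolating polynomial is simply the chord joining (x_0, f_0) and (x_1, f_1).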
We replace the function f(x) by the interpolating polynomial and integrate between the given limits:

$$I = \int_a^b w(x)\,f(x)\,dx = \sum_{k=0}^{n}\left[\int_a^b w(x)\,l_k(x)\,dx\right] f_k + \int_a^b w(x)\,\frac{\prod_{i=0}^{n}(x - x_i)}{(n+1)!}\,f^{(n+1)}(\xi)\,dx = \sum_{k=0}^{n} \lambda_k f_k + R_n,$$

where

$$\lambda_k = \int_a^b w(x)\,l_k(x)\,dx, \qquad R_n = \frac{1}{(n+1)!}\int_a^b w(x)\prod_{i=0}^{n}(x - x_i)\,f^{(n+1)}(\xi)\,dx.$$
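These weights can be computed symbolically. The following sketch (an illustration assuming SymPy and the common choice w(x) = 1; interpolatory_weights is not a name from the presentation) integrates each Lagrange fundamental polynomial over [a, b]:

import sympy as sp

def interpolatory_weights(nodes, a, b):
    # lambda_k = integral over [a, b] of l_k(x), taking the weight function w(x) = 1
    x = sp.Symbol('x')
    weights = []
    for k, xk in enumerate(nodes):
        lk = sp.Integer(1)
        for i, xi in enumerate(nodes):
            if i != k:
                lk *= (x - xi) / (xk - xi)        # Lagrange fundamental polynomial l_k
        weights.append(sp.simplify(sp.integrate(lk, (x, a, b))))
    return weights

print(interpolatory_weights([0, 1], 0, 1))                       # [1/2, 1/2]
print(interpolatory_weights([0, sp.Rational(1, 2), 1], 0, 1))    # [1/6, 2/3, 1/6]

For equispaced nodes these are exactly the Cotes numbers derived below: the two-point rule gives the trapezoidal weights and the three-point rule gives Simpson's weights.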
Determination of the error term
If π(x) does not change its sign in [a,b] and f^(n+1) is continuous in [a,b], then, using the Mean Value Theorem of integral calculus, we can write the error of approximation in the form

$$R_n = \frac{f^{(n+1)}(\eta)}{(n+1)!}\int_a^b w(x)\,\pi(x)\,dx, \qquad \eta \in (a,b).$$
But if π(x) changes its sign in [a,b], then the error can be obtained in the following manner. Since the method is exact for polynomials of degree less than or equal to n, we have

$$R_n = \frac{c}{(n+1)!}\,f^{(n+1)}(\eta), \qquad \text{where} \qquad c = \int_a^b w(x)\,x^{n+1}\,dx - \sum_{k=0}^{n} \lambda_k x_k^{n+1}.$$
Newton-Cotes Methods
When w(x) = 1 and the nodes x_k are equispaced with x_0 = a, x_n = b and spacing h = (b - a)/n, the methods are called Newton-Cotes integration methods. The weights λ_k are called the Cotes numbers.

Setting x = x_0 + sh, we get

$$\prod_{i=0}^{n}(x - x_i) = h^{n+1}\,s(s-1)(s-2)\cdots(s-n),$$

$$l_k(x) = \frac{(-1)^{n-k}}{k!\,(n-k)!}\,s(s-1)\cdots(s-k+1)(s-k-1)\cdots(s-n), \quad \text{and}$$

$$\lambda_k = \frac{(-1)^{n-k}\,h}{k!\,(n-k)!}\int_0^n s(s-1)\cdots(s-k+1)(s-k-1)\cdots(s-n)\,ds,$$

$$R_n = \frac{h^{n+2}}{(n+1)!}\int_0^n s(s-1)\cdots(s-n)\,f^{(n+1)}(\xi)\,ds.$$
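As a sanity check on the λ_k formula above, a short symbolic computation (assuming SymPy; cotes_numbers is an illustrative name) reproduces the Cotes numbers used in the next two sections:

import sympy as sp

def cotes_numbers(n):
    # lambda_k = (-1)**(n-k) * h / (k! (n-k)!) * integral_0^n of prod_{i != k} (s - i) ds
    s, h = sp.symbols('s h', positive=True)
    weights = []
    for k in range(n + 1):
        poly = sp.Integer(1)
        for i in range(n + 1):
            if i != k:
                poly *= (s - i)
        lam = (-1)**(n - k) * h / (sp.factorial(k) * sp.factorial(n - k)) * sp.integrate(poly, (s, 0, n))
        weights.append(sp.simplify(lam))
    return weights

print(cotes_numbers(1))   # [h/2, h/2]         -> trapezoidal rule
print(cotes_numbers(2))   # [h/3, 4*h/3, h/3]  -> Simpson's rule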
Trapezoidal Rule
For n = 1, we have x_0 = a, x_1 = b, h = b - a, and

$$\lambda_0 = -h\int_0^1 (s-1)\,ds = \frac{h}{2}, \qquad \lambda_1 = h\int_0^1 s\,ds = \frac{h}{2},$$

and we get the method

$$\int_a^b f(x)\,dx \approx \frac{b-a}{2}\,\bigl[f(a) + f(b)\bigr].$$

The error in the trapezoidal rule becomes

$$R_1 = \frac{h^3}{2}\int_0^1 s(s-1)\,f^{(2)}(\xi)\,ds.$$

Since s(s-1) does not change sign in [0,1], we get

$$R_1 = \frac{h^3}{2}\,f^{(2)}(\eta)\int_0^1 s(s-1)\,ds = -\frac{h^3}{12}\,f^{(2)}(\eta) = -\frac{(b-a)^3}{12}\,f^{(2)}(\eta), \qquad \eta \in (a,b).$$

Thus, the trapezoidal rule is exact for polynomials of degree less than or equal to 1 and hence is of order 1.
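A quick numerical check of the rule and its error term, using f(x) = e^x on [0, 1] as an illustrative example (this example is not from the presentation):

import math

a, b = 0.0, 1.0
f = math.exp
exact = math.exp(b) - math.exp(a)                 # exact value of the integral of e^x on [a, b]
approx = (b - a) / 2 * (f(a) + f(b))              # trapezoidal rule

error = exact - approx                            # this is R_1
# R_1 = -(b-a)^3/12 * f''(eta) with f''(x) = e^x, so 1 <= f''(eta) <= e and
# the error must lie between -e/12 and -1/12.
print(approx, error)                              # ~1.8591, ~-0.1409
print(-math.e / 12, -1 / 12)                      # ~-0.2265, ~-0.0833

The computed error, about -0.141, indeed lies between -e/12 ≈ -0.227 and -1/12 ≈ -0.083, as the error term predicts.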
Simpson’s Rule
For n = 2, we have h = (b - a)/2, x_0 = a, x_1 = (a+b)/2, x_2 = b. We get

$$\lambda_0 = \frac{h}{2}\int_0^2 (s-1)(s-2)\,ds = \frac{h}{3}, \qquad \lambda_1 = -h\int_0^2 s(s-2)\,ds = \frac{4h}{3}, \qquad \lambda_2 = \frac{h}{2}\int_0^2 s(s-1)\,ds = \frac{h}{3},$$

and we get the method

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4 f\!\left(\frac{a+b}{2}\right) + f(b)\right],$$

which is called Simpson's rule.


Error in Simpson’s Rule
$$R_2 = \frac{h^4}{3!}\int_0^2 s(s-1)(s-2)\,f^{(3)}(\xi)\,ds.$$

Note that s(s-1)(s-2) changes its sign in (0,2), so we use the second method to obtain the error term. Since the method is exact for polynomials of degree less than or equal to 2, we have

$$c = \int_a^b x^3\,dx - \frac{b-a}{6}\left[a^3 + 4\left(\frac{a+b}{2}\right)^3 + b^3\right] = 0.$$

This shows that the method is exact for polynomials of degree three also; hence the error term becomes

$$R_2 = \frac{c}{4!}\,f^{(4)}(\eta), \qquad \eta \in (a,b), \qquad \text{with} \qquad c = \int_a^b x^4\,dx - \frac{b-a}{6}\left[a^4 + 4\left(\frac{a+b}{2}\right)^4 + b^4\right] = -\frac{(b-a)^5}{120}.$$

Therefore, the error of approximation in Simpson's rule becomes

$$R_2 = -\frac{(b-a)^5}{2880}\,f^{(4)}(\eta) = -\frac{h^5}{90}\,f^{(4)}(\eta), \qquad \eta \in (a,b).$$

Thus, Simpson's rule is exact for polynomials of degree less than or equal to 3 and hence is of order 3.
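The same check for Simpson's rule, again with f(x) = e^x on [0, 1] (illustrative, not from the presentation):

import math

a, b = 0.0, 1.0
f = math.exp
exact = math.exp(b) - math.exp(a)
approx = (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))   # Simpson's rule

error = exact - approx                            # this is R_2
# R_2 = -(b-a)^5/2880 * f''''(eta) with f''''(x) = e^x, so the error must lie
# between -e/2880 and -1/2880.
print(approx, error)                              # ~1.718861, ~-5.8e-4
print(-math.e / 2880, -1 / 2880)                  # ~-9.4e-4, ~-3.5e-4

The computed error, about -5.8 x 10^-4, lies between -e/2880 ≈ -9.4 x 10^-4 and -1/2880 ≈ -3.5 x 10^-4, as the error term predicts.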
Thank You
