
Chapter 3

Numerical Methods to Solve First-Order IVPs

MAT2384X - Spring/Summer 2024



1 Initial Value Problems

Consider the first-order initial value problem:

y ′ = f (x, y), y(x0 ) = y0 .

To find an approximation to the solution y(x) of y ′ = f (x, y), y(x0 ) = y0 on the interval
a ≤ x ≤ b, we choose N + 1 distinct points, x0 , x1 , . . . , xN , such that a = x0 < x1 < x2 <
. . . < xN = b, and construct approximations yn to y(xn), n = 0, 1, . . . , N.

In considering numerical methods for the solution of y ′ = f (x, y), y(x0 ) = y0 we shall use
the following notation:

• h > 0 denotes the integration step size

• xn = x0 + nh is the n-th node

• y(xn) is the exact solution at xn

• yn is the numerical solution at xn

• fn = f(xn, yn) is the numerical value of f(x, y) at (xn, yn)



2 Euler’s and Improved Euler’s Methods

2.1 Euler’s Method

To find an approximation to the solution y(x) of y ′ = f (x, y), y(x0 ) = y0 on the interval
a ≤ x ≤ b, we choose N + 1 distinct points, x0 , x1 , . . . , xN , such that a = x0 < x1 < x2 <
. . . < xN = b and set h = (xN − x0 )/N .

The algorithm for Euler’s method is as follows.

(1) Choose h such that N = (xN − x0 )/h is an integer.

(2) Given y0 , for n = 0, 1, . . . , N , iterate the scheme

yn+1 = yn + hf (xn , yn ) (one-step method).

Then, yn is an approximation of y(xn ).
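The following is a minimal Python sketch of this scheme (not part of the original notes; the function name euler and its signature are illustrative):

    def euler(f, x0, y0, h, N):
        """Approximate the solution of y' = f(x, y), y(x0) = y0,
        using N Euler steps of size h. Returns the nodes x_n and
        the approximations y_n."""
        xs, ys = [x0], [y0]
        x, y = x0, y0
        for _ in range(N):
            y = y + h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
            x = x + h             # x_{n+1} = x_n + h
            xs.append(x)
            ys.append(y)
        return xs, ys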



Remark 2.1. This corresponds to making a piecewise linear approximation to the solution
as we are assuming constant slope on each subinterval [xi , xi+1 ], based on the left endpoint.

We should be aware that this approximation, in general, is not very accurate unless h is very
small. The error of the method is approximately proportional to h, i.e. Euler’s method is of
order one. So this usually requires doing a very large number of steps.

Example 2.1. Use Euler’s method with h = 0.1 to approximate the solution to the initial
value problem
y ′ (x) = e−y , y(0) = 0,
over the interval 0 ≤ x ≤ 1.

n xn yn y(xn ) Error
0 0 0 0 0
1 0.1 0.1 0.095310 0.004690
2 0.2 0.190489 0.182322 0.008167
3 0.3 0.273140 0.262364 0.010776
4 0.4 0.349239 0.336472 0.013067
5 0.5 0.419761 0.405465 0.014296
6 0.6 0.485481 0.470004 0.015477
7 0.7 0.547021 0.530628 0.016393
8 0.8 0.604888 0.587787 0.017101
9 0.9 0.659502 0.641854 0.017648
10 1.0 0.711213 0.693147 0.018066
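The table can be checked with a short script (a sketch, not part of the notes); the exact solution of this IVP is y(x) = ln(1 + x), and the printed values may differ from the table in the last digit or two because of rounding:

    import math

    # Euler's method for y' = e^{-y}, y(0) = 0, with h = 0.1 on [0, 1].
    h, x, y = 0.1, 0.0, 0.0
    for n in range(1, 11):
        y = y + h * math.exp(-y)          # Euler step
        x = x + h
        exact = math.log(1.0 + x)         # exact solution y(x) = ln(1 + x)
        print(f"{n:2d}  {x:.1f}  {y:.6f}  {exact:.6f}  {y - exact:.6f}")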

Example 2.2. Use Euler’s method to approximate the solution to

y ′ = x + y, y(0) = 0.

Use h = 0.1, and do 5 steps.

n   xn    yn        y(xn)      Relative Error (%)
0   0.0   0         0          0
1   0.1   0         0.005171   100
2   0.2   0.01      0.021403   53.3
3   0.3   0.031     0.049859   37.8
4   0.4   0.0641    0.091825   30.2
5   0.5   0.11051   0.148271   25.5

Remark 2.2. It is generally incorrect to say that by taking h sufficiently small one can
obtain any desired level of precision, that is, get yn as close to y(xn ) as one wants. As the
step size h decreases, at first the truncation error of the method decreases, but as the number
of steps increases, the number of arithmetic operations increases, and, hence, the roundoff
errors increase.

2.2 Improved Euler’s Method

The Improved Euler's method uses the average of the slopes at the left and right endpoints of each step. Here it is formulated in terms of a predictor and a corrector:

    y_{n+1}^P = y_n^C + h f(x_n, y_n^C),

    y_{n+1}^C = y_n^C + (h/2) [ f(x_n, y_n^C) + f(x_{n+1}, y_{n+1}^P) ].
This method is of order 2.

Remark 2.3. Notice that the second formula, the corrector, is implicit in y_{n+1}: the right-hand side requires the very value we are trying to compute. However, the formula has order two and gives better results than Euler's method, so we would like to use it. To get around the problem “we have to know y_{n+1} to calculate y_{n+1}”, we use Euler's method to predict a value of y_{n+1}, which is then plugged into the implicit formula to calculate a better, corrected value.
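A Python sketch of this predictor-corrector iteration (the function name improved_euler is illustrative, not part of the notes):

    def improved_euler(f, x0, y0, h, N):
        """Approximate y' = f(x, y), y(x0) = y0 with N steps of the
        Improved Euler (predictor-corrector) method of step size h."""
        xs, ys = [x0], [y0]
        x, y = x0, y0
        for _ in range(N):
            y_pred = y + h * f(x, y)                         # predictor: Euler step
            y = y + 0.5 * h * (f(x, y) + f(x + h, y_pred))   # corrector: averaged slope
            x = x + h
            xs.append(x)
            ys.append(y)
        return xs, ys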

Example 2.3. Use Improved Euler’s method to approximate the solution to

y ′ = x + y, y(0) = 0.

Use h = 0.1, and do 5 steps.

n   xn    y_n^P      y_n^C      y(xn)      Relative Error (%)
0   0.0   —          0          0          0
1   0.1   0          0.005      0.005171   3.31
2   0.2   0.0155     0.021025   0.021403   1.77
3   0.3   0.043128   0.049233   0.049859   1.26
4   0.4   0.084156   0.090902   0.091825   1.01
5   0.5   0.139992   0.147447   0.148271   0.56

3 Multistep Predictor-Corrector Methods

3.1 General Multistep Methods

Consider the initial value problem

y ′ = f (x, y), y(a) = η,

where f (x, y) is continuous with respect to x and the exact solution, y(x), exists and is
unique on [a, b].

We look for an approximate numerical solution {yn} at the nodes xn = a + nh, n = 0, 1, . . . , N, where h is the step size and N = (b − a)/h.

For this purpose, we consider the k-step linear method:


    Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j},

where yn ≈ y(xn) and fn := f(xn, yn). We normalize the method by the condition α_k = 1
and insist that the number of steps be exactly k by imposing the condition

    (α_0, β_0) ≠ (0, 0).
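For example, Euler's method fits this framework with k = 1: taking α_1 = 1, α_0 = −1, β_1 = 0 and β_0 = 1 gives y_{n+1} − y_n = h f_n, i.e. y_{n+1} = y_n + h f(x_n, y_n).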



3.2 Adams-Bashforth-Moulton Methods of orders 3 and 4

As a first example of multistep methods, we consider the three-step Adams-Bashforth-Moulton method of order 3, given by the formula pair:

    y_{n+1}^P = y_n^C + (h/12) (23 f_n^C − 16 f_{n−1}^C + 5 f_{n−2}^C),        f_k^C = f(x_k, y_k^C),

    y_{n+1}^C = y_n^C + (h/12) (5 f_{n+1}^P + 8 f_n^C − f_{n−1}^C),            f_k^P = f(x_k, y_k^P),

with local error estimate

    Error ≈ −(1/10) (y_{n+1}^C − y_{n+1}^P).

As a second and better known example of multistep methods, we consider the four-step
Adams-Bashforth-Moulton method of order 4. The Adams-Bashforth predictor and the
Adams-Moulton corrector of order 4 are

    y_{n+1}^P = y_n^C + (h/24) (55 f_n^C − 59 f_{n−1}^C + 37 f_{n−2}^C − 9 f_{n−3}^C)

and

    y_{n+1}^C = y_n^C + (h/24) (9 f_{n+1}^P + 19 f_n^C − 5 f_{n−1}^C + f_{n−2}^C),

where

    f_n^C = f(x_n, y_n^C) and f_n^P = f(x_n, y_n^P).

Starting values are obtained with a Runge-Kutta method or otherwise.
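A Python sketch of the order-4 predictor-corrector loop (the function name abm4 is illustrative); it assumes the lists xs and ys already contain the starting values for n = 0, 1, 2, 3, obtained for instance with Runge-Kutta:

    def abm4(f, xs, ys, h, N):
        """Extend the approximation of y' = f(x, y) up to n = N using the
        4-step Adams-Bashforth predictor and Adams-Moulton corrector.
        xs, ys must already hold the four starting values."""
        fs = [f(x, y) for x, y in zip(xs, ys)]     # corrected slopes f_n^C
        for n in range(3, N):
            x_next = xs[n] + h
            # Adams-Bashforth predictor of order 4
            y_pred = ys[n] + h / 24 * (55 * fs[n] - 59 * fs[n - 1]
                                       + 37 * fs[n - 2] - 9 * fs[n - 3])
            # Adams-Moulton corrector of order 4, evaluated at the predicted value
            y_corr = ys[n] + h / 24 * (9 * f(x_next, y_pred) + 19 * fs[n]
                                       - 5 * fs[n - 1] + fs[n - 2])
            xs.append(x_next)
            ys.append(y_corr)
            fs.append(f(x_next, y_corr))
        return xs, ys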



4 Runge-Kutta Method

4.1 Fourth-Order Runge-Kutta Method

The fourth-order Runge-Kutta method (also known as the classic Runge-Kutta method, or sometimes simply as the Runge-Kutta method) is very popular among the explicit one-step methods.

The (classic) four-stage Runge-Kutta method of order 4 is given by the formulas

    k1 = h f(x_n, y_n)
    k2 = h f(x_n + h/2, y_n + k1/2)
    k3 = h f(x_n + h/2, y_n + k2/2)
    k4 = h f(x_n + h, y_n + k3)
    y_{n+1} = y_n + (1/6) (k1 + 2 k2 + 2 k3 + k4)

Essentially, the method makes four estimates of ∆y based on slopes at the left end, the midpoint, and the right end of the subinterval. A weighted average of these ∆y estimates, with the two midpoint estimates weighted more heavily than those at the left and right ends, is added to the previous y value.
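A Python sketch of a single step (the function name rk4_step is illustrative); repeating it N times and advancing x by h after each call reproduces tables such as the one in Example 4.1 below, up to rounding:

    def rk4_step(f, x, y, h):
        """One step of the classic fourth-order Runge-Kutta method for y' = f(x, y)."""
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6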

Example 4.1. Use the classic Runge-Kutta method to approximate the solution to

y ′ = x + y, y(0) = 0.

Use h = 0.1, and do 10 steps.

n xn yn y(xn ) Error
1 0.1 0.5170833334e−2 0.5170918e−2 0.84666e−7
2 0.2 0.2140257085e−1 0.21402758e−1 0.18715e−6
3 0.3 0.4985849705e−1 0.49858808e−1 0.31095e−6
4 0.4 0.9182424006e−1 0.91824698e−1 0.45794e−6
5 0.5 0.1487206385 0.148721271 0.6324e−6
6 0.6 0.2221179620 0.222118800 0.8380e−6
7 0.7 0.3137516264 0.313752707 0.10806e−5
8 0.8 0.4255395630 0.425540928 0.13650e−5
9 0.9 0.5596014135 0.559603111 0.16975e−5
10 1.0 0.7182797438 0.718281828 0.20842e−5

Example 4.2. Use Euler, Improved Euler, and classic Runge-Kutta to approximate y1 for the IVP

y ′ = 4xy, y(0) = 2.

Use h = 0.1.
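A short standalone script (a sketch, not the official solution) that carries out the single step requested with each method; the exact solution of this IVP is y(x) = 2 e^{2x²}, so y(0.1) = 2 e^{0.02} ≈ 2.040403 can be used to compare the three answers:

    import math

    f = lambda x, y: 4 * x * y        # right-hand side of y' = 4xy
    x0, y0, h = 0.0, 2.0, 0.1

    # Euler
    y1_euler = y0 + h * f(x0, y0)

    # Improved Euler (predictor-corrector)
    y_pred = y0 + h * f(x0, y0)
    y1_imp = y0 + 0.5 * h * (f(x0, y0) + f(x0 + h, y_pred))

    # Classic Runge-Kutta of order 4
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 2, y0 + k1 / 2)
    k3 = h * f(x0 + h / 2, y0 + k2 / 2)
    k4 = h * f(x0 + h, y0 + k3)
    y1_rk4 = y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    exact = 2 * math.exp(2 * (x0 + h) ** 2)   # exact solution y(x) = 2 e^{2x^2}
    print(y1_euler, y1_imp, y1_rk4, exact)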

Example 4.3. Consider the initial value problem:

y ′ = 2x(1 + y 2 ), y(0) = 0.

Approximate the solution of this IVP on the interval [0, 1] with step size h = 0.1 using Euler’s
method, the Improved Euler’s method, and the Runge-Kutta method of order 4.

xn y(xn ) Euler Error Imp. Euler Error Runge-Kutta Error


0.1 0.0100 0 0.0100 0.0100 0 0.0100 0
0.2 0.0400 0.0200 0.0200 0.0400 0 0.0400 0
0.3 0.0902 0.0600 0.0302 0.0902 0 0.0902 0
0.4 0.1614 0.1202 0.0412 0.1614 0 0.1614 0
0.5 0.2553 0.2014 0.0539 0.2554 0.0001 0.2553 0
0.6 0.3764 0.3054 0.0710 0.3765 0.0001 0.3764 0
0.7 0.5334 0.4366 0.0968 0.5335 0.0001 0.5334 0
0.8 0.7445 0.6033 0.1412 0.7441 0.0004 0.7446 0.0001
0.9 1.0505 0.8216 0.2289 1.0471 0.0034 1.0505 0
1.0 1.5574 1.1231 0.4343 1.5387 0.0187 1.5574 0

Example 4.4. Consider the initial value problem:

y ′ = e−y , y(0) = 0.

Approximate to 6 decimals the solution of this IVP on the interval [0, 1] with step size h = 0.1
using Euler’s method, the Improved Euler’s method, and the Runge-Kutta method of order
4.
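A possible driver for this comparison (a sketch, not the official solution; the exact solution is again y(x) = ln(1 + x)):

    import math

    f = lambda x, y: math.exp(-y)        # right-hand side of y' = e^{-y}
    h, N = 0.1, 10
    x = 0.0
    y_e = y_ie = y_rk = 0.0              # Euler, Improved Euler, Runge-Kutta values

    for n in range(1, N + 1):
        # Euler
        y_e = y_e + h * f(x, y_e)
        # Improved Euler (predictor-corrector)
        y_pred = y_ie + h * f(x, y_ie)
        y_ie = y_ie + 0.5 * h * (f(x, y_ie) + f(x + h, y_pred))
        # Classic Runge-Kutta of order 4
        k1 = h * f(x, y_rk)
        k2 = h * f(x + h / 2, y_rk + k1 / 2)
        k3 = h * f(x + h / 2, y_rk + k2 / 2)
        k4 = h * f(x + h, y_rk + k3)
        y_rk = y_rk + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
        exact = math.log(1.0 + x)        # exact solution y(x) = ln(1 + x)
        print(f"{x:.1f}  {y_e:.6f}  {y_ie:.6f}  {y_rk:.6f}  {exact:.6f}")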
