Topic 1. Adv Math
Module 1
Simplest method:
Always plot or graph to narrow your search for the real root
BRACKETING METHODS
1. Bisection – oldest method; figured out by the Greeks in 1000 BC
2. Regula Falsi – 1700 (middle ages)
3. Advanced methods (1970)
a. Pegasus
b. Kings
c. University of Illinois
Bracketing methods
- are based on two initial guesses that bracket the root. It always works
but converges slowly (more iterations are needed).
- Bisection, false position, graphing
Open methods
- involve one or more initial guesses but there is no need for them to
bracket the root.
- do not always work (since they can diverge) but when they do they
usually converge faster
- Secant method, Newton-Raphson, inverse quadratic interpolation
BISECTION METHOD
1. Choose an interval which contains the root – make sure there is only one
root; if there are two, change the interval
2. Evaluate the function f(x) at the midpoint
3. Determine which interval to discard
4. Go back to step 2
5. When the interval is smaller than the tolerance, report the root as the midpoint of the interval
Sign check: + at the left endpoint, + at the midpoint, − at the right endpoint, so the root lies in the right subinterval.
Initial guess:
x = (x₁ + x₂)/2 = 2
f(2) = 2² − 4 = 0 (root is obtained)
Slowest method because the interval is only cut by 50% each iteration (not 99%)
Bisection – a technique that does not care about the problem.
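The steps above can be sketched in code. A minimal bisection sketch for the f(x) = x² − 4 example; the starting interval and tolerance are illustrative choices, not from the notes:

```python
def bisect(f, a, b, tol=1e-6):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial interval must bracket a root"
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m              # landed exactly on the root
        if fa * fm < 0:           # root is in the left half
            b, fb = m, fm
        else:                     # root is in the right half
            a, fa = m, fm
    return (a + b) / 2            # report the midpoint of the final interval

root = bisect(lambda x: x**2 - 4, 0.5, 3.5)   # -> close to 2
```

Guaranteed to converge once the bracket is valid, but only gains one binary digit per iteration, which is why the notes call it the slowest method.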
[Graph: y from −4 to 6 over x from 0.5 to 3.5]
In algebra: y=f ( x )
(If x is ChE 110, to pass ChE 110, you must have taken ChE 110)
x² − 4 = 0
x² = 4
x = √4 (not allowed)
x = 4/x (allowed)
Try 2: 2 = 4/2 = 2
Try 3: x = 4/3 = 1.3333, then x = 4/1.3333 = 3
x = 1.333, x = 3 (oscillates; chaos theory, 2nd OE)
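A quick sketch of the fixed-point iteration x ← 4/x above, showing that x = 2 is a fixed point while any other start oscillates (the helper function is illustrative, not from the notes):

```python
def fixed_point_iterates(x0, n=6):
    """Iterate x <- 4/x and collect the sequence of values."""
    xs = [x0]
    for _ in range(n):
        xs.append(4 / xs[-1])
    return xs

print(fixed_point_iterates(2.0, 4))   # stays at the fixed point: 2, 2, 2, ...
print(fixed_point_iterates(3.0, 4))   # oscillates between 3 and 1.333...
```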
Numerical Methods:
1. Direct Method (Gaussian Elimination)
- produces an answer to a problem in a fixed number of
computational steps
2. Iterative Method
- Produces a sequence of approximate answers (designed to
converge ever closer to the true solution, under the proper
condition)
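As a contrast between the two classes, here is a minimal direct-method sketch: naive Gaussian elimination with back substitution. It runs in a fixed number of steps with no iteration; the 2×2 system is an illustrative example (no pivoting, so nonzero pivots are assumed):

```python
def gauss_solve(A, b):
    """Naive Gaussian elimination: a direct method with a fixed number of steps."""
    n = len(b)
    A = [row[:] for row in A]           # work on copies
    b = b[:]
    for k in range(n - 1):              # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # assumes A[k][k] != 0 (no pivoting)
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])   # -> [1.0, 3.0]
```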
s = b − f(b)(b − a)/(f(b) − f(a)),  where b = x₁, a = x₂, s = x₃
x     y
1.0   −1
1.2   −0.272
1.4   0.744
1.6   2.096
1.8   3.832
2.0   6
[Graph of y = x³ − 2 and the approximation line in the interval (1, 2)]
Begin the computation by giving the initial bounds on the zero: a=1 b=2
f(1) = 1³ − 2 = −1
f(2) = 2³ − 2 = 6
s = b − f(b)(b − a)/(f(b) − f(a)) = 2 − 6(2 − 1)/(6 − (−1)) = 1.143
a        b   s        f(a)           f(b)   f(s)
1.2584   2   1.2593   −7.2348×10⁻³   6      −2.9561×10⁻³
1.2593   2   1.2597   −2.9561×10⁻³   6      −1.0525×10⁻³
1.2597   2   1.2598   −1.0525×10⁻³   6      −5.7641×10⁻⁴
1.2598   2   1.2599   −5.7641×10⁻⁴   6      −1.0024×10⁻⁴
1.2599   2   1.2599   −1.0024×10⁻⁴   6      −1.0024×10⁻⁴
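A regula falsi sketch reproducing this computation for f(x) = x³ − 2; the tolerance and iteration cap are illustrative choices:

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """False position: the new estimate is the x-intercept of the line
    through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    s = a
    for _ in range(max_iter):
        s = b - fb * (b - a) / (fb - fa)   # x-intercept of the secant line
        fs = f(s)
        if abs(fs) < tol:
            break
        if fa * fs < 0:        # root lies between a and s
            b, fb = s, fs
        else:                  # root lies between s and b
            a, fa = s, fs
    return s

root = regula_falsi(lambda x: x**3 - 2, 1.0, 2.0)   # -> about 1.2599
```

Note how, as in the table above, the endpoint b = 2 never moves for this function: only the left end creeps toward the root, which is why regula falsi can converge slowly from one side.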
f(xᵢ₊₁) = f(xᵢ) + f′(xᵢ)(xᵢ₊₁ − xᵢ) + (f″(xᵢ)/2!)(xᵢ₊₁ − xᵢ)² + ...
Setting f(xᵢ₊₁) = 0 and truncating after the first-order term:
xᵢ₊₁ = xᵢ − f(xᵢ)/f′(xᵢ)    (1) Newton's method
Example:
f(x) = x² − 4
f′(x) = 2x
x = x − (x² − 4)/(2x) = (2x² − x² + 4)/(2x) = (x² + 4)/(2x) = (1/2)(x + 4/x) (similar to fixed point)
Newton: quadratic convergence; the error is roughly squared each iteration (e.g. (1×10⁻⁸)² = 1×10⁻¹⁶)
regula falsi: linear convergence
if the guess is far from the correct point, combine Bisection with Newton
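A Newton-Raphson sketch for the f(x) = x² − 4 example above; the starting guess and tolerance are illustrative:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**2 - 4, lambda x: 2 * x, 3.0)   # -> 2.0
```

Starting from x = 3 the iterates are 2.1667, 2.00641, 2.0000103, ...: the number of correct digits roughly doubles each step, which is the quadratic convergence described above.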
SECANT METHOD
x = x − f(x)/f′(x)    the derivative is a slope (dy/dx)
replace f′(x) with (f(xᵢ) − f(xᵢ₋₁))/(xᵢ − xᵢ₋₁) = (y₁ − y₀)/(x₁ − x₀)
xᵢ₊₁ = xᵢ − f(xᵢ)(xᵢ − xᵢ₋₁)/(f(xᵢ) − f(xᵢ₋₁))
Example:
Determine the root of f(x) = e⁻ˣ − x using the Secant Method. Use starting
points x₀ = 0 and x₁ = 1.
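A secant-method sketch for this example; the root of e⁻ˣ − x is approximately 0.5671, and the tolerance is an illustrative choice:

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: replace f'(x) by the slope through the last two points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: math.exp(-x) - x, 0.0, 1.0)   # -> about 0.56714
```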
SECANT METHODS
- For derivatives that are difficult or inconvenient to evaluate, the derivative can be
approximated by a backward finite divided difference:
f′(xᵢ) ≈ (f(xᵢ + δxᵢ) − f(xᵢ))/δxᵢ
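This finite-difference approximation gives the modified secant method, sketched below; the perturbation fraction δ = 10⁻⁶ and the test function are illustrative choices:

```python
import math

def modified_secant(f, x0, delta=1e-6, tol=1e-10, max_iter=50):
    """One-point secant: approximate f'(x) with (f(x + delta*x) - f(x)) / (delta*x)."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x if x != 0 else delta     # perturbation scaled to x
        slope = (f(x + dx) - f(x)) / dx         # finite-difference derivative
        step = f(x) / slope
        x -= step
        if abs(step) < tol:
            break
    return x

root = modified_secant(lambda x: math.exp(-x) - x, 1.0)   # -> about 0.56714
```

Only one starting point is needed, at the cost of one extra function evaluation per iteration.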
SECANT METHOD
- closely related to regula falsi; it results from a slight modification of the
latter
- Instead of choosing the subinterval that must contain the zero, form the
next approximation from the two most recently generated points:
x₂ = x₁ − y₁(x₁ − x₀)/(y₁ − y₀)
Evaluate f(s)
If f(s) = 0, stop
advantage: derivative-free
PEGASUS METHOD
next iteration: keep xᵢ₋₁, but replace its stored function value with
fᵢ₋₁ fᵢ/(fᵢ + fᵢ₊₁)
Newton - quadratic
Bisection - linear
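The notes give only the scaling factor, so here is a fuller sketch based on the standard formulation of the Pegasus method: regula falsi, except that when the new point falls on the same side twice, the retained endpoint's function value is scaled by fᵢ/(fᵢ + fᵢ₊₁). Everything beyond that factor is an assumption:

```python
def pegasus(f, a, b, tol=1e-10, max_iter=100):
    """Pegasus method: regula falsi with the retained endpoint's f-value
    scaled by f_i / (f_i + f_{i+1}) to avoid one-sided stagnation."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial interval must bracket a root"
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)    # regula falsi step
        fc = f(c)
        if abs(fc) < tol:
            break
        if fb * fc < 0:
            a, fa = b, fb                   # sign change: bracket is (b, c)
        else:
            fa = fa * fb / (fb + fc)        # same side twice: scale retained f(a)
        b, fb = c, fc
    return c

root = pegasus(lambda x: x**3 - 2, 1.0, 2.0)   # -> about 1.259921
```

Unlike plain regula falsi on x³ − 2, both ends of the bracket now move, which restores fast convergence.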
ITERATIVE METHODS
[A]{x} = {b}
- to start the solution, make initial guesses for the x's; a simple approach is
to assume that they are all zero
εₐ = |(xᵢʲ − xᵢʲ⁻¹)/xᵢʲ| × 100%
j = present iteration, j − 1 = previous iteration
first, solve each of the equations for its unknown on the diagonal:
x₁ = (7.85 + 0.1x₂ + 0.2x₃)/3    (1)
x₂ = (−19.3 − 0.1x₁ + 0.3x₃)/7    (2)
x₃ = (71.4 − 0.3x₁ + 0.2x₂)/10    (3)
x₁ = (7.85 + 0.1(0) + 0.2(0))/3 = 2.616667
x₂ = (−19.3 − 0.1(2.616667) + 0.3(0))/7 = −2.794524
x₃ = (71.4 − 0.3(2.616667) + 0.2(−2.794524))/10 = 7.005610
x₁ = (7.85 + 0.1(−2.794524) + 0.2(7.005610))/3 = 2.990557
The method is therefore converging on the true solution; make additional iterations.
For x₁:
εₐ₁ = |(2.990557 − 2.616667)/2.990557| × 100 = 12.5%
εₐ₂ = 11.8%
εₐ₃ = 0.076%
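The hand iteration above can be sketched as follows; the stopping tolerance and iteration cap are illustrative choices:

```python
def gauss_seidel_example(tol_percent=1e-6, max_iter=100):
    """Gauss-Seidel for the 3x3 system above: each new value is used
    immediately in the equations that follow it."""
    x1 = x2 = x3 = 0.0
    for _ in range(max_iter):
        x1_old, x2_old, x3_old = x1, x2, x3
        x1 = (7.85 + 0.1 * x2 + 0.2 * x3) / 3        # equation (1)
        x2 = (-19.3 - 0.1 * x1 + 0.3 * x3) / 7       # equation (2)
        x3 = (71.4 - 0.3 * x1 + 0.2 * x2) / 10       # equation (3)
        errs = [abs((new - old) / new) * 100          # percent relative errors
                for new, old in ((x1, x1_old), (x2, x2_old), (x3, x3_old))]
        if max(errs) < tol_percent:
            break
    return x1, x2, x3

print(gauss_seidel_example())   # converges to (3, -2.5, 7)
```

The first pass reproduces the hand values 2.616667, −2.794524, 7.005610 exactly, since each equation already sees the freshly updated unknowns.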
JACOBI ITERATION
- Utilizes a somewhat different tactic.
- Rather than using the latest available x’s, this technique uses the equation
to compute a new set of x’s on the basis of a set of old x’s.
- Thus, as new values are generated, they are not immediately used but
rather are retained for the next iteration.
|aᵢᵢ| > Σ |aᵢⱼ|  (sum over j = 1 to n, j ≠ i)
i.e., the absolute value of the diagonal coefficient in each of the equations
must be larger than the sum of the absolute value of the other coefficients
in the equation.
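A Jacobi sketch on the same system as the earlier Gauss-Seidel example, with a diagonal-dominance check first; the matrix form is reconstructed from the rearranged equations, and the tolerances are illustrative:

```python
def is_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| for j != i, in every row."""
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(len(A)) if j != i)
               for i in range(len(A)))

def jacobi(A, b, tol=1e-8, max_iter=200):
    """Jacobi iteration: compute ALL new x's from the old x's, then swap;
    new values are retained for the next iteration, not used immediately."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
assert is_diagonally_dominant(A)
print(jacobi(A, b))   # close to [3, -2.5, 7]
```

The only difference from Gauss-Seidel is that `x_new` is built entirely from the old `x`, which usually makes Jacobi converge more slowly on the same system.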
Example:
Use the Gauss-Seidel method to solve the following system until the percent
relative error falls below εₛ = 5%:
10X₁ + 2X₂ − X₃ = 27
−3X₁ − 6X₂ + 2X₃ = −61.5
X₁ + X₂ + 5X₃ = −21.5
X₁ = (27 − 2X₂ + X₃)/10    equation (1)
X₂ = (61.5 − 3X₁ + 2X₃)/6    equation (2)
X₃ = (−21.5 − X₁ − X₂)/5    equation (3)
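A sketch of this exercise, iterating until every percent relative error falls below εₛ = 5%; the implementation details are illustrative, and equation (2)'s signs are arranged so the update matches the original equation −3X₁ − 6X₂ + 2X₃ = −61.5. The solution is near (0.5, 8, −6):

```python
def gauss_seidel_exercise(eps_s=5.0, max_iter=50):
    """Gauss-Seidel with a percent-relative-error stopping criterion."""
    x1 = x2 = x3 = 0.0
    for _ in range(max_iter):
        old = (x1, x2, x3)
        x1 = (27 - 2 * x2 + x3) / 10             # equation (1)
        x2 = (61.5 - 3 * x1 + 2 * x3) / 6        # equation (2)
        x3 = (-21.5 - x1 - x2) / 5               # equation (3)
        errs = [abs((new - o) / new) * 100       # percent relative errors
                for new, o in zip((x1, x2, x3), old)]
        if max(errs) < eps_s:                    # stop when all fall below eps_s
            break
    return x1, x2, x3

print(gauss_seidel_exercise())   # near (0.5, 8, -6)
```

With the loose 5% criterion the iteration stops after only a few passes, yet the answers are already within about 0.001 of the true solution, a consequence of the system being diagonally dominant.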