Numerical Methods
Gaussian Elimination
Types of Solutions
Systems of linear equations come in many sizes, for example:

x + y + z = 1        x1 + 2x2 + x4 = 1            3x + 4y + 7z = 7
2x + 3y - z = 2      x2 + 2x3 + x4 + x5 = 2       x + 2y + z = 1
x + 2y - 2z = 4      x2 + 4x3 + x5 = 4
Any system of linear equations can be put into matrix form: Ax = b
The matrix A contains the coefficients of the variables, and the vector x has
the variables as its components.
For example, for the first system above the matrix version would be:
[ 1 1  1 ] [ x ]   [ 1 ]
[ 2 3 -1 ] [ y ] = [ 2 ]
[ 1 2 -2 ] [ z ]   [ 4 ]
     A       x       b

Prepared by Vince Zaccone, Campus Learning Assistance Services, UCSB
Before diving into larger systems we will look at some familiar 2-variable cases. If the
equation has two variables we think of one of them as being dependent on the other. Thus we
have only one independent variable. A one-dimensional object is a line, so solutions to these
two-equation systems can be thought of as the intersection points of two lines. We will
generalize this concept when dealing with larger systems.
Consider the following sets of equations:

x + y = 1        One solution: the two lines intersect at a single point.
2x + y = 2

4x + 2y = 2      No solution: the two lines are parallel.
2x + y = 2

4x + 2y = 4      Infinitely many solutions: both equations describe the same line.
2x + y = 2
No Solution

x + y + z = 1
2x + 3y - z = 2
x + 2y - 2z = 4

If the system is inconsistent there will be no solutions. In this case a contradiction appears during the solution process.
We will use a procedure called Gaussian Elimination to solve systems such as these.
First, an example with a unique solution. Start with the augmented matrix for the system x - y + z = 2, x + y = 5, x + 2y - z = 4:

[ 1 -1  1 | 2 ]
[ 1  1  0 | 5 ]
[ 1  2 -1 | 4 ]

Eliminate the first column below the pivot with R2* = R2 + (-1)R1 and R3* = R3 + (-1)R1:

[ 1 -1  1 | 2 ]
[ 0  2 -1 | 3 ]
[ 0  3 -2 | 2 ]

For the next step we want to get a zero in the bottom-middle spot. For hand calculations I like to keep entries integers as much as possible, so I would do 2 steps here: multiply row 3 by 2, then add (-3) times row 2. This can be done together as R3* = 2R3 - 3R2, then multiply (-1) through row 3:

[ 1 -1  1 |  2 ]                [ 1 -1  1 | 2 ]
[ 0  2 -1 |  3 ]   R3* = -R3    [ 0  2 -1 | 3 ]
[ 0  0 -1 | -5 ]                [ 0  0  1 | 5 ]

Next use row 3 to get zeroes in column 3, with R1* = R1 - R3 and R2* = R2 + R3, then divide row 2 by 2 (R2* = (1/2)R2):

[ 1 -1  0 | -3 ]                [ 1 -1  0 | -3 ]
[ 0  2  0 |  8 ]                [ 0  1  0 |  4 ]
[ 0  0  1 |  5 ]                [ 0  0  1 |  5 ]

The last step is to add row 2 to row 1 (R1* = R1 + R2):

[ 1 0 0 | 1 ]
[ 0 1 0 | 4 ]     This matrix is in Reduced Row Echelon Form (RREF).
[ 0 0 1 | 5 ]

Here is the reduced matrix. Notice that the left part has ones along the diagonal and zeroes elsewhere. This is the 3x3 Identity Matrix. In this case the solution is staring us in the face: x = 1, y = 4, z = 5.
The solution represents the common intersection point of the
three planes represented by the equations in the system.
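The whole elimination can be scripted. Below is a minimal Gauss-Jordan reduction in Python, a sketch of the hand procedure above with partial pivoting added for numerical safety (a step the hand calculation did not need); the function name is my own, not from these notes:

```python
def gauss_jordan_solve(A, b):
    """Reduce the augmented matrix [A | b] to RREF and return the solution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # build the augmented matrix
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row, then eliminate this column in every other row.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [v - factor * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# The worked example: x - y + z = 2, x + y = 5, x + 2y - z = 4
A = [[1, -1, 1], [1, 1, 0], [1, 2, -1]]
b = [2, 5, 4]
print(gauss_jordan_solve(A, b))  # x = 1, y = 4, z = 5 (up to rounding)
```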
x + y + z = 1            [ 1  1  1 | 1 ]
2x + 3y - z = 2          [ 2  3 -1 | 2 ]
x + 2y - 2z = 4          [ 1  2 -2 | 4 ]

R2* = R2 - 2R1, R3* = R3 - R1:

[ 1  1  1 | 1 ]
[ 0  1 -3 | 0 ]
[ 0  1 -3 | 3 ]

R3* = R3 - R2:

[ 1  1  1 | 1 ]
[ 0  1 -3 | 0 ]
[ 0  0  0 | 3 ]
At this point you might notice a problem. That last row doesn’t make sense. It might help to
write out the equation that the last row represents.
It says 0x+0y+0z=3.
Are there any values of x, y and z that make this equation work? (the answer is NO!)
This system is called INCONSISTENT because we arrive at a contradiction during the
solution procedure. This means that the system has no solution.
x + y + 2z = 1           [ 1  1 2 | 1 ]
2x - y + z = 2           [ 2 -1 1 | 2 ]
4x + y + 5z = 4          [ 4  1 5 | 4 ]

R2* = R2 - 2R1, R3* = R3 - 4R1:

[ 1  1  2 | 1 ]
[ 0 -3 -3 | 0 ]
[ 0 -3 -3 | 0 ]

R3* = R3 - R2:

[ 1  1  2 | 1 ]
[ 0 -3 -3 | 0 ]
[ 0  0  0 | 0 ]

This time there is a row of zeroes at the bottom. However there is no contradiction here. The bottom row represents the equation 0x+0y+0z=0. This is always true! Continue with row reduction by dividing row 2 by -3, then get a zero in row 1. The system has infinitely many solutions.
Simple iteration

Solve the nonlinear set
x - sin y = 0
y - cos x = 0

Iterate x_{i+1} = sin(y_i) and y_{i+1} = cos(x_{i+1}), applying the updates sequentially (each uses the most recent value), assuming initial values x0 = 0, y0 = 0:

i    x = sin y   y = cos x
0    0.0000      1.0000
1    0.8415      0.6664
2    0.6181      0.8150
3    0.7277      0.7467
4    0.6792      0.7781
5    0.7019      0.7636
6    0.6915      0.7703
7    0.6963      0.7672
8    0.6941      0.7686
9    0.6951      0.7680
10   0.6947      0.7683
11   0.6949      0.7681
12   0.6948      0.7682
13   0.6948      0.7682

(Chart: the iterates oscillate with shrinking amplitude, converging toward the fixed point x = 0.6948, y = 0.7682.)
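The table above can be reproduced in a few lines of Python, with the sin/cos updates applied sequentially as in the table:

```python
import math

# Fixed-point iteration for x - sin(y) = 0, y - cos(x) = 0,
# updating x first and then y with the fresh x, as the table does.
x, y = 0.0, 0.0
for i in range(100):
    x_prev, y_prev = x, y
    x = math.sin(y)
    y = math.cos(x)
    if abs(x - x_prev) < 1e-6 and abs(y - y_prev) < 1e-6:
        break

print(round(x, 4), round(y, 4))  # 0.6948 0.7682
```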
• Find the root of f(x) = x^3 + 5x - 5 that is in the interval [0, 1] to 6 decimal places. We perform the iteration
• x_{n+1} = g(x_n) = (5 - x_n^3)/5
• starting with x_0 = 0.75.
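A short Python sketch of this iteration; stopping when successive iterates agree to 6 decimal places is one reasonable reading of the tolerance:

```python
# Fixed-point iteration x_{n+1} = g(x_n) = (5 - x_n**3)/5 for the root of
# f(x) = x**3 + 5x - 5 in [0, 1], starting from x_0 = 0.75.
def g(x):
    return (5 - x**3) / 5

x = 0.75
for n in range(200):
    x_new = g(x)
    if abs(x_new - x) < 0.5e-6:  # iterates agree to 6 decimal places
        x = x_new
        break
    x = x_new

print(round(x, 6))
```

The iteration converges because |g'(x)| = 3x^2/5 is below 1 near the root.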
• Theoretically, methods such as Gauss-Jordan elimination will give us exact solutions.
• If the system is too big, we will have problems with
• round-off errors
• the storage capacity of the computer
• the computational effort of direct methods, which becomes prohibitively expensive
• How can we solve such systems?
Advantages
• Iteration: repeating a process over and over until an
approximation of the solution is reached.
Iterative Solution Methods
• Starts with an initial approximation for the solution vector (x0)
• At each iteration, updates the x vector by using the system Ax=b
• During the iterations the matrix A is not changed, so sparsity is preserved
• Each iteration involves a matrix-vector product
• If A is sparse, this product is done efficiently
Iterative solution procedure
• Write the system Ax=b in an equivalent form
x=Ex+f (like x=g(x) for fixed-point iteration)
• Starting with x0, generate a sequence of approximations {xk} iteratively by
xk+1=Exk+f
• Representation of E and f depends on the type of the method used
• For every method, E and f are obtained from A and b, but in a different way
Convergence
• The convergence of an iterative method can be calculated by
determining the relative percent change of each element in {x}. For
example, for the ith element in the jth iteration,
ε_a,i = | (x_i^j - x_i^(j-1)) / x_i^j | × 100%
For systems that have coefficient matrices with the appropriate structure
– especially large, sparse systems (many coefficients whose value is zero)
– iterative techniques may be preferable
For example, a 16x16 sparse system Au = b arising from a finite-difference discretization on a 4x4 grid has the block-tridiagonal form (T is tridiagonal with 4 on the diagonal and -1 beside it, I is the 4x4 identity):

    [  T -I       ]            [  4 -1       ]
A = [ -I  T -I    ]        T = [ -1  4 -1    ]
    [    -I  T -I ]            [    -1  4 -1 ]
    [       -I  T ]            [       -1  4 ]

with unknowns u1, ..., u16 and right-hand side
b = (0.08, 0.16, 0.36, 1.64, 0.16, 0, 0, 1.0, 0.36, 0, 0, 1.0, 1.64, 1.0, 1.0, 2.0).
Almost all entries of A are zero, so an iterative method only ever touches the few nonzeros.
• Additional advantages of iterative methods include
• (1) programming is simple, and
• (2) it is easily applicable when coefficients are nonlinear.
• Jacobi iterative,
• Gauss-Seidel, and
• Successive-over-relaxation (SOR)
• SOR is a method used to accelerate the convergence
• Gauss-Seidel Iteration is a special case of SOR method
Advantages
Iterative methods can be applied to systems with as many as 100,000 variables. Examples of these large systems arise in the solution of partial differential equations.
Advantages
Also the amount of storage, as stated earlier, is far less than for direct methods. In our example, we have:
• 100,000 × 100,000 matrix entries for a direct method
• 3 × 100,000 − 2 stored entries for iteration
Applications:
(a) Electric networks consisting of resistors
(b) Heat-conduction problems
(c) Particle diffusion
(d) Certain stress-strain problems
(e) Fluid, magnetic, or electric potential
Disadvantages
Convergence Restrictions
• There are two conditions for the iterative method to converge.
• It is necessary that one coefficient in each equation is dominant.
• The sufficient condition is that the diagonal is dominant:

|aii| > Σ (j = 1..n, j ≠ i) |aij|

The system can then be converted to

x1 = (b1 - a12 x2 - a13 x3 - a14 x4) / a11
x2 = (b2 - a21 x1 - a23 x3 - a24 x4) / a22
x3 = (b3 - a31 x1 - a32 x2 - a34 x4) / a33
x4 = (b4 - a41 x1 - a42 x2 - a43 x3) / a44
Iterative Methods
• Idea behind iterative methods: convert Ax = b into x = Cx + d

Ax = b   ⇔   x = Cx + d   (equivalent system)

• Generate a sequence of approximations (iterates) x^1, x^2, ..., starting from an initial x^0:

x^(j+1) = C x^j + d

• Similar to the fixed-point iteration method
Rearrange Matrix Equations

a11 x1 + a12 x2 + a13 x3 + a14 x4 = b1
a21 x1 + a22 x2 + a23 x3 + a24 x4 = b2
a31 x1 + a32 x2 + a33 x3 + a34 x4 = b3
a41 x1 + a42 x2 + a43 x3 + a44 x4 = b4

• Rewrite the matrix equation by solving each equation for its diagonal unknown (n equations, n variables):

x1 = -(a12/a11) x2 - (a13/a11) x3 - (a14/a11) x4 + b1/a11
x2 = -(a21/a22) x1 - (a23/a22) x3 - (a24/a22) x4 + b2/a22
x3 = -(a31/a33) x1 - (a32/a33) x2 - (a34/a33) x4 + b3/a33
x4 = -(a41/a44) x1 - (a42/a44) x2 - (a43/a44) x3 + b4/a44
Iterative Methods

Ax = b   →   x^j = C x^(j-1) + d,   with C_ii = 0

• x and d are column vectors, and C is a square matrix:

    [     0      -a12/a11  -a13/a11  -a14/a11 ]        [ b1/a11 ]
C = [ -a21/a22      0      -a23/a22  -a24/a22 ]    d = [ b2/a22 ]
    [ -a31/a33  -a32/a33      0      -a34/a33 ]        [ b3/a33 ]
    [ -a41/a44  -a42/a44  -a43/a44      0     ]        [ b4/a44 ]
Jacobi Method

x1_new = (b1 - a12 x2_old - a13 x3_old - a14 x4_old) / a11
x2_new = (b2 - a21 x1_old - a23 x3_old - a24 x4_old) / a22
x3_new = (b3 - a31 x1_old - a32 x2_old - a34 x4_old) / a33
x4_new = (b4 - a41 x1_old - a42 x2_old - a43 x3_old) / a44
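The Jacobi sweep translates directly to code. A minimal Python sketch, tested on the diagonally dominant system 4x1 - x2 - x3 = -2, 6x1 + 8x2 = 45, -5x1 + 12x3 = 80 that is solved later in these notes:

```python
def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Jacobi iteration: every component update uses only the previous sweep."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

A = [[4, -1, -1], [6, 8, 0], [-5, 0, 12]]
b = [-2, 45, 80]
x = jacobi(A, b, [0.0, 0.0, 0.0])
print(x)  # converges to (2.375, 3.84375, 7.65625)
```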
Gauss-Seidel Method

Differs from the Jacobi method by sequential updating: use the new xi immediately as they become available.

x1_new = (b1 - a12 x2_old - a13 x3_old - a14 x4_old) / a11
x2_new = (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22
x3_new = (b3 - a31 x1_new - a32 x2_new - a34 x4_old) / a33
x4_new = (b4 - a41 x1_new - a42 x2_new - a43 x3_new) / a44
Gauss-Seidel Method

Use the new xi at the jth iteration as soon as they become available:

x1^j = (b1 - a12 x2^(j-1) - a13 x3^(j-1) - a14 x4^(j-1)) / a11
x2^j = (b2 - a21 x1^j - a23 x3^(j-1) - a24 x4^(j-1)) / a22
x3^j = (b3 - a31 x1^j - a32 x2^j - a34 x4^(j-1)) / a33
x4^j = (b4 - a41 x1^j - a42 x2^j - a43 x3^j) / a44

Stop when

ε_a,i = | (x_i^j - x_i^(j-1)) / x_i^j | × 100% < ε_s   for all x_i

(Figure: iteration paths for (a) the Gauss-Seidel Method and (b) the Jacobi Method.)
Convergence and Diagonal Dominant
• Sufficient condition -- A is diagonally dominant
• Diagonally dominant: the magnitude of the diagonal element is larger than the sum of the absolute values of the other elements in the row:

|aii| > Σ (j = 1..n, j ≠ i) |aij|
• Necessary and sufficient condition -- the magnitude of
the largest eigenvalue of C (not A) is less than 1
• Fortunately, many engineering problems of practical
importance satisfy this requirement
• Use partial pivoting to rearrange equations!
Diagonally Dominant Matrix

    [ 8  1  2  3 ]
A = [ 1 10  2  5 ]    is strictly diagonally dominant because each diagonal
    [ 1  6 12  3 ]    element is greater than the sum of the absolute values
    [ 3  2  3  9 ]    of the other elements in its row.

    [ 8  1  2  3 ]
A = [ 1  6  2  5 ]    is not diagonally dominant (in row 2, 6 < 1 + 2 + 5).
    [ 1  6 12  3 ]
    [ 3  2  3  9 ]
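A small Python check of this definition, applied to the two example matrices above (entry magnitudes as printed in the notes):

```python
def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

A1 = [[8, 1, 2, 3], [1, 10, 2, 5], [1, 6, 12, 3], [3, 2, 3, 9]]
A2 = [[8, 1, 2, 3], [1, 6, 2, 5], [1, 6, 12, 3], [3, 2, 3, 9]]
print(is_strictly_diagonally_dominant(A1))  # True
print(is_strictly_diagonally_dominant(A2))  # False: row 2 has 6 < 1 + 2 + 5
```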
Jacobi and Gauss-Seidel

Example:
5x1 + 2x2 + 2x3 = 10
2x1 + 4x2 +  x3 = 7
3x1 +  x2 + 6x3 = 12

Jacobi:
x1_new = -(2/5) x2_old - (2/5) x3_old + 10/5
x2_new = -(2/4) x1_old - (1/4) x3_old + 7/4
x3_new = -(3/6) x1_old - (1/6) x2_old + 12/6

Gauss-Seidel:
x1_new = -(2/5) x2_old - (2/5) x3_old + 10/5
x2_new = -(2/4) x1_new - (1/4) x3_old + 7/4
x3_new = -(3/6) x1_new - (1/6) x2_new + 12/6
Example

-5x1       + 12x3 = 80        [ -5  0 12 ]
 4x1 -  x2 -  x3  = -2        [  4 -1 -1 ]
 6x1 + 8x2        = 45        [  6  8  0 ]

Not diagonally dominant!!
Order of the equations can be important. Rearrange the equations to ensure convergence:

 4x1 -  x2 -  x3  = -2        [  4 -1 -1 ]
 6x1 + 8x2        = 45        [  6  8  0 ]
-5x1       + 12x3 = 80        [ -5  0 12 ]
Convergence - Gauss-Seidel

(Figure: for a 2D system, Equation 1 is solved for y and Equation 2 for x; starting from the initial guess, the alternating updates staircase toward the intersection of the two lines.)
Gauss-Seidel Iteration

Rearrange:
x1 = (x2 + x3 - 2)/4
x2 = (45 - 6x1)/8
x3 = (80 + 5x1)/12

Assume x1 = x2 = x3 = 0.

First iteration:
x1 = (0 + 0 - 2)/4 = -0.5
x2 = (45 - 6(-0.5))/8 = 6.0
x3 = (80 + 5(-0.5))/12 = 6.4583
Gauss-Seidel Method

Second iteration:
x1 = (-2 + 6.0 + 6.4583)/4 = 2.6146
x2 = (45 - 6(2.6146))/8 = 3.6641
x3 = (80 + 5(2.6146))/12 = 7.7561

Third iteration:
x1 = (-2 + 3.6641 + 7.7561)/4 = 2.3550
x2 = (45 - 6(2.3550))/8 = 3.8587
x3 = (80 + 5(2.3550))/12 = 7.6479

Fourth iteration:
x1 = (-2 + 3.8587 + 7.6479)/4 = 2.3767
x2 = (45 - 6(2.3767))/8 = 3.8425
x3 = (80 + 5(2.3767))/12 = 7.6569

5th: x1 = 2.3749, x2 = 3.8439, x3 = 7.6562
6th: x1 = 2.3750, x2 = 3.8437, x3 = 7.6563
7th: x1 = 2.3750, x2 = 3.8438, x3 = 7.6562
The method is ended when all elements have converged to a set tolerance:

ε_a,i = | (x_i^j - x_i^(j-1)) / x_i^j | × 100% < ε_s   for all x_i

5th: x1 = 2.3749, x2 = 3.8439, x3 = 7.6562
6th: x1 = 2.3750, x2 = 3.8437, x3 = 7.6563
7th: x1 = 2.3750, x2 = 3.8438, x3 = 7.6562

With ε_s = 0.00001:

           x1          x2           x3
           2.3750      3.8437       7.6563
           2.3750      3.8438       7.6562
ε_a (%)    0           0.00260166   0.001306114
ε_a (%)    0.000000    0.000026     0.000013
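The sweep above can be sketched in Python, using the same relative-percent-change stopping test:

```python
def gauss_seidel(A, b, x0, es=1e-5, max_iter=100):
    """Gauss-Seidel: each component update uses the newest available values.
    Stops when the relative percent change of every component is below es (%)."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        converged = True
        for i in range(n):
            old = x[i]
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            if x[i] != 0 and abs((x[i] - old) / x[i]) * 100 >= es:
                converged = False
        if converged:
            break
    return x

# The worked example: 4x1 - x2 - x3 = -2, 6x1 + 8x2 = 45, -5x1 + 12x3 = 80
A = [[4, -1, -1], [6, 8, 0], [-5, 0, 12]]
b = [-2, 45, 80]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0])
print(x)  # converges to (2.375, 3.84375, 7.65625)
```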
The Jacobi iteration for the system
5x + y + 3z = 2
3x + 6y + 2z = 3
2x + 3y + 6z = 1
is
x_{n+1} = (2 - y_n - 3z_n)/5
y_{n+1} = (3 - 3x_n - 2z_n)/6
z_{n+1} = (1 - 2x_n - 3y_n)/6

Couple of Iterations

1st iteration (x0 = y0 = z0 = 0):
x1 = (2 - 0 - 0)/5 = 2/5
y1 = (3 - 0 - 0)/6 = 1/2
z1 = (1 - 0 - 0)/6 = 1/6

2nd iteration:
x2 = (2 - 1(1/2) - 3(1/6))/5 = 1/5
y2 = (3 - 3(2/5) - 2(1/6))/6 = 11/45
z2 = (1 - 2(2/5) - 3(1/2))/6 = -13/60
Convergence - Jacobi
• Consider the convergence graphically for a 2D system:
(Figure: Equation 1 solved for y and Equation 2 solved for x; from the initial guess, the Jacobi updates hop toward the intersection.)

Convergence - Jacobi
• What if we swap the order of the equations?
(Figure: the same construction with Equation 1 and Equation 2 swapped.)

• Successive Over-relaxation (SOR):

x_i_new = ω x_i_new(G-S) + (1 - ω) x_i_old
Successive Over-Relaxation (SOR)
• Relaxation method

G-S method:  x2_new = (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22
SOR method:  x2_new = (1 - ω) x2_old + ω x2_new(G-S)
                    = (1 - ω) x2_old + ω (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22

Full sweep:
x1_new = (1 - ω) x1_old + ω (b1 - a12 x2_old - a13 x3_old - a14 x4_old) / a11
x2_new = (1 - ω) x2_old + ω (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22
x3_new = (1 - ω) x3_old + ω (b3 - a31 x1_new - a32 x2_new - a34 x4_old) / a33
x4_new = (1 - ω) x4_old + ω (b4 - a41 x1_new - a42 x2_new - a43 x3_new) / a44
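A minimal Python sketch of the SOR sweep; with w = 1.2 on the system 4x1 - x2 - x3 = -2, 6x1 + 8x2 = 45, -5x1 + 12x3 = 80 it reproduces the run shown later in these notes (first iterate -0.6, 7.29, 7.7):

```python
def sor(A, b, x0, w, tol=1e-6, max_iter=100):
    """SOR sweep: blend the old value with the Gauss-Seidel update, weight w.
    w = 1 recovers Gauss-Seidel; w > 1 over-relaxes (extrapolates)."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            gs = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            new = (1 - w) * x[i] + w * gs
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            break
    return x

A = [[4, -1, -1], [6, 8, 0], [-5, 0, 12]]
b = [-2, 45, 80]
x = sor(A, b, [0.0, 0.0, 0.0], w=1.2)
print(x)  # converges to (2.375, 3.84375, 7.65625)
```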
Convergence - SOR
• Successive Over-Relaxation (SOR) just adds an extrapolation step.
• ω = 1.3 implies going an extra 30% along each update.
(Figure: Gauss-Seidel steps between Equation 1 and Equation 2, each followed by an extrapolation controlled by the relaxation factor w (= ω).)
SOR with ω = 1.2

» [A,b]=Example
A =
   4  -1  -1
   6   8   0
  -5   0  12
b =
  -2
  45
  80
» x0=[0 0 0]'
» tol=1.e-6
» w=1.2; x = SOR(A, b, x0, w, tol, 100);
    i        x1       x2       x3
   1.0000  -0.6000   7.2900   7.7000
   2.0000   4.0170   1.6767   8.4685
   3.0000   1.6402   4.9385   7.1264
   4.0000   2.6914   3.3400   7.9204
   5.0000   2.2398   4.0661   7.5358
   6.0000   2.4326   3.7474   7.7091
   7.0000   2.3504   3.8851   7.6334
   8.0000   2.3855   3.8261   7.6661
   9.0000   2.3705   3.8513   7.6521
  10.0000   2.3769   3.8405   7.6580
  11.0000   2.3742   3.8451   7.6555
  12.0000   2.3753   3.8432   7.6566
  13.0000   2.3749   3.8440   7.6561
  14.0000   2.3751   3.8436   7.6563
  15.0000   2.3750   3.8438   7.6562
  16.0000   2.3750   3.8437   7.6563
  17.0000   2.3750   3.8438   7.6562
  18.0000   2.3750   3.8437   7.6563
  19.0000   2.3750   3.8438   7.6562
  20.0000   2.3750   3.8437   7.6563
  21.0000   2.3750   3.8438   7.6562
SOR method converged

Converges more slowly than Gauss-Seidel!
SOR with ω = 1.5

» [A,b]=Example2
A =
   4  -1  -1
   6   8   0
  -5   0  12
b =
  -2
  45
  80
» x0=[0 0 0]'
» tol=1.e-6
» w = 1.5; x = SOR(A, b, x0, w, tol, 100);
    i        x1       x2       x3
   1.0000  -0.7500   9.2812   9.5312
   2.0000   6.6797  -3.7178   9.4092
   3.0000  -1.9556  12.4964   4.0732
   4.0000   6.4414  -5.0572  11.9893
   5.0000  -1.3712  12.5087   3.1484
   6.0000   5.8070  -4.3497  12.0552
   7.0000  -0.7639  11.4718   3.4949
   8.0000   5.2445  -3.1985  11.5303
   9.0000  -0.2478  10.3155   4.0800
  10.0000   4.7722  -2.0890  10.9426
  11.0000   0.1840   9.2750   4.6437
  12.0000   4.3775  -1.1246  10.4141
  13.0000   0.5448   8.3869   5.1335
  14.0000   4.0477  -0.3097   9.9631
  15.0000   0.8462   7.6404   5.5473
  ………
  20.0000   3.3500   1.4220   9.0016
  ………
  30.0000   2.7716   2.8587   8.2035
  ………
  50.0000   2.4406   3.6808   7.7468
  ………
 100.0000   2.3757   3.8419   7.6573
SOR method did not converge

Diverged!! The iterates oscillate and fail the tolerance within 100 iterations.
Diagonally Dominant
• What does diagonally dominant mean for a 2D system?
• 10x+y=12 => high-slope (more vertical)
• x+10y=21 => low-slope (more horizontal)
• Identity matrix (or any diagonal matrix) would have the intersection
of a vertical and a horizontal line. The b vector controls the location of
the lines.
Homework No. 3

(Flowchart: bisection - compute xm = 0.5(xl + xu); test whether f(xm) = 0; if yes, end; if no, repeat with the subinterval that still brackets the root.)
MATLAB Problem 2

Bisection Method example: f(x) = x^2 - 2x - 3 = 0, initial estimates xl, xu = 2.0, 3.2
(f(2.0) = -3 and f(3.2) = 0.84, so the root is bracketed)

iter   xl      xu       xr        f(xr)      Δx
1      2.0     3.2      2.6      -1.44       1.2
2      2.6     3.2      2.9      -0.39       0.6
3      2.9     3.2      3.05      0.2025     0.3
4      2.9     3.05     2.975    -0.0994     0.15
5      2.975   3.05     3.0125    0.0502     0.075
6      2.975   3.0125   2.99375  -0.02496    0.0375
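A minimal Python version of the bisection loop, applied to the same f; the first iterates match the table above:

```python
def bisect(f, xl, xu, tol=1e-6, max_iter=100):
    """Bisection: halve the bracketing interval, keeping the sign change."""
    fl = f(xl)
    for _ in range(max_iter):
        xm = 0.5 * (xl + xu)
        fm = f(xm)
        if fm == 0 or (xu - xl) < tol:
            return xm
        if fl * fm < 0:       # sign change in [xl, xm]: root is there
            xu = xm
        else:                 # otherwise the root lies in [xm, xu]
            xl, fl = xm, fm
    return 0.5 * (xl + xu)

f = lambda x: x**2 - 2*x - 3   # roots at x = -1 and x = 3
root = bisect(f, 2.0, 3.2)
print(round(root, 6))
```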
Convergence or Divergence of Jacobi and Gauss-Seidel Methods