
Applied Mathematics and Computation
Unit 3
Linear Algebra
A set of Linear Algebraic Equations

• a's are the constant coefficients
• b's are the constants (right-hand side)
• x's are the unknowns
• n is the number of unknowns and the number of equations
In Matrix Form
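Written out, the general form of the system and its matrix statement are:

  a11·x1 + a12·x2 + … + a1n·xn = b1
  a21·x1 + a22·x2 + … + a2n·xn = b2
   ⋮
  an1·x1 + an2·x2 + … + ann·xn = bn

  i.e.  [A]{x} = {b}, with [A] the n×n coefficient matrix, {x} the vector of
  unknowns and {b} the vector of constants.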
Linear Algebraic Equations
• Laws of conservation
– Mass, momentum and energy
– Multi-component systems
– Discretized differential equations
• Equilibrium and compatibility equations
• Kirchhoff's laws
Solution methods
• Graphical
• Cramer's rule
• Elimination of unknowns
• Gauss elimination
• Gauss-Jordan
• LU decomposition
• Gauss-Seidel
Graphical method
• Practical only for three or fewer equations

Graphical Method

(a) Same slope, different intercepts => no solution

(b) Same slope and same intercept => infinitely many solutions
(c) Nearly equal slopes => ill-conditioned system (the intersection
point is difficult to detect)
Cramer's Rule

• Evaluating the determinants becomes very time
consuming as the number of equations increases
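For reference, the rule computes each unknown as a ratio of two determinants:

  x_i = det(A_i) / det(A)

where A_i is [A] with its i-th column replaced by {b}; evaluating n+1 determinants
of size n is what makes the method expensive for large n.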
Elimination of unknowns

• Same result as Cramer's rule; like it, practical only for
a small number of equations
Gauss Elimination

Multiply Eq (1) by a factor and subtract it from Eq (2) to eliminate the
first unknown; repeat for the remaining equations and unknowns.


Gauss Elimination

(a) Forward elimination


(b) Backward substitution
Example

Multiply Eq (1) by (0.1/3) and subtract from Eq (2)


Multiply Eq (1) by (0.3/3) and subtract from Eq (3)
Multiply Eq (2) by (-0.19/7.00333) and subtract
from Eq (3)
• Backward substitution

x3 = 7.0
x2 = -2.5
x1 = 3.0
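A minimal Python sketch of the two stages. The coefficient values below are an
assumption: they are the standard textbook system consistent with the factors
listed above (0.1/3, 0.3/3, -0.19/7.00333), since the slide shows the equations
only as images.

import numpy as np

def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination + back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]      # assumes the pivot is nonzero
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Assumed 3x3 example system (consistent with the factors shown above)
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_eliminate(A, b))   # approximately [3.0, -2.5, 7.0]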
Operation counting
• Forward elimination

• Backward substitution
Operation counting
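For reference, the standard totals (which the Gauss-Jordan comparison later relies on) are:

  Forward elimination:   ≈ n³/3 multiplications/divisions and ≈ n³/3 additions/subtractions
  Backward substitution: ≈ n²/2 of each
  Total:                 ≈ 2n³/3 + O(n²) flops, dominated by forward elimination as n grows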
Pitfall-I
• Pivot element is zero
• Example

• Factor = 4/0, 2/0


• Division by zero
Pitfall-II
• Pivot element is nearly zero
• A tolerance should be specified in the program to detect this
condition
• Example-II

• Factor = 1/0.0003
• Division by a very small number magnifies round-off error
• x1 = (2.0001 - 3*0.7)/0.0003 = -333
• x1 = (2.0001 - 3*0.67)/0.0003 = -33
• x1 = (2.0001 - 3*0.667)/0.0003 = -3.33
• The computed x1 is extremely sensitive to how precisely x2 is known
Pitfall-III
• Round-off error
– Important when dealing with 100 or more equations,
because errors propagate from step to step
• Remedy
– Increase the number of significant figures (extended precision)
– This increases the computational effort
Remedy
• Pivoting
– Switching the largest available element into the pivot position
• Complete pivoting
– Both the columns and the rows are searched for the
largest element, and both are switched.
• Partial pivoting
– Only the column below the pivot element is searched for the
largest element, and the corresponding row is switched.
• Complete pivoting is rarely used because switching
columns changes the order of the x's and adds
complexity to the program
• In programming, rows are often not physically switched
• Instead, a record of the pivot row numbers is kept
• This defines the order of forward elimination and
backward substitution without moving rows in memory
• Moreover, scaling is required to identify
whether pivoting is necessary (a sketch of scaled partial
pivoting follows the example below)
Example
• Without scaling & pivoting vs. with scaling & pivoting

• Scaling itself introduces round-off error

• Without scaling, but with pivoting

• Scaling should be used only to identify the need for
pivoting; pivoting and Gauss elimination should then be
carried out with the original coefficient values.
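A minimal sketch of this strategy (illustrative only; for clarity it physically
swaps rows rather than only tracking pivot row numbers as described above):
each row is scaled only for the purpose of choosing the pivot, and elimination
then proceeds with the original coefficients.

import numpy as np

def gauss_scaled_partial_pivoting(A, b):
    """Gauss elimination; pivots chosen from scaled magnitudes, elimination
    carried out on the original (unscaled) coefficients."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    scale = np.abs(A).max(axis=1)           # largest |a_ij| in each row
    for k in range(n - 1):
        # choose the row whose scaled pivot candidate is largest
        p = k + np.argmax(np.abs(A[k:, k]) / scale[k:])
        if p != k:                           # swap rows k and p (and their scales)
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
            scale[[k, p]] = scale[[p, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x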
Pitfall-IV
• Singular systems
– Slopes are the same
– Graphically, the lines are either parallel or superimposed
– No solution or infinitely many solutions
– The computer algorithm should recognise this case
– The determinant is zero

For two equations, equal slopes mean
  -a11/a12 = -a21/a22  =>  a11·a22 = a21·a12  =>  a11·a22 - a21·a12 = 0
i.e. the determinant of [A] vanishes.
Pitfalls
• Ill-conditioned equations
– Slopes of the equations are nearly the same

– Determinant (the denominator in Cramer's rule) is nearly zero


Determinant determination
• Example 1: well-conditioned problem

• Example 2: ill-conditioned problem

• Example 3: ill-conditioned problem

• The value of the determinant alone is not a reliable
indicator of an ill-conditioned system.
Determinant determination with scaling
• Example 1: well-conditioned problem

• Example 2: ill-conditioned problem

• Example 3: ill-conditioned problem

• The value of the determinant after scaling is a better
indicator of an ill-conditioned system; however, it is
difficult to evaluate for more than three equations
Another Indicator
• Ill-conditioned equations: inverse of the scaled matrix
– Scale the matrix of coefficients [A] so that the largest
element in each row is 1.
– Invert the scaled matrix.
– If there are elements of the inverse that are several orders of
magnitude greater than one => ill-conditioned
Other indicators
• If the product of the inverse and the
original matrix is not close to the identity matrix
=> ill-conditioned
• If the inverse of the inverted matrix is not
close to the original matrix => ill-conditioned
• However, these checks are computationally inefficient
Another indicator
• Ill-conditioned equations:
– A small change in a coefficient results in a large
change in the solution
• Example

• A change of <5% in a coefficient changes x1 by 100%


Pitfalls
• Ill-conditioned equations: the matrix condition number
Cond[A] is much greater than unity

  ||Δx|| / ||x||  <=  Cond[A] · ||ΔA|| / ||A||
Matrix norm
• A norm is a real-valued function that provides
a measure of the size of multi-component
mathematical entities (vectors and matrices)
• Frobenius norm

• Row-sum norm
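Their standard definitions, together with the condition number built from them, are:

  Frobenius norm:           ||A||_f = sqrt( Σ_i Σ_j a_ij² )
  Row-sum (uniform) norm:   ||A||_∞ = max over i of Σ_j |a_ij|
  Condition number:         Cond[A] = ||A|| · ||A⁻¹||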


Example
Gauss-Jordan
• Divide each row to make its pivot element equal to 1
• Full elimination in all the other (n-1) equations, not only
forward elimination
• Results in an identity matrix instead of an upper triangular
matrix.
• No need for backward substitution
• Multiplications/divisions: n·n + (n-1)·n + (n-2)·n + … = [n(n+1)/2]·n = n³/2 + n²/2
• i.e., Σ of the number of terms in each equation (n-i+1) × the number of equations (n)
• Additions/subtractions: n·(n-1) + (n-1)·(n-1) + (n-2)·(n-1) + … = [n(n+1)/2]·(n-1) = n³/2 - n/2
• i.e., Σ of the number of terms in each equation (n-i+1) × the number of equations (n-1)

• Number of flops ≈ n³

• About 50% more than Gauss elimination (≈ 2n³/3 flops)


LU Decomposition

• Saves memory
LU Decomposition
Forward Substitution
Backward Substitution
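A compact Doolittle-style sketch of the factorization and the two substitution
steps named above (an illustration, not the slide's own code):

import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L @ U,
    with ones on the diagonal of L."""
    A = A.astype(float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # backward substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

Once [A] is factored, new right-hand sides only require the two substitution
sweeps, which is where the savings in effort (and memory) come from.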
Matrix Inverse
[A][A]⁻¹ = [I]
Matrix Inverse
Complex systems

[A + iB] {X + iY} = {U + iV}

Expanding,

[A]{X} + i[B]{X} + i[A]{Y} - [B]{Y} = {U} + i{V}
([A]{X} - [B]{Y}) + i([B]{X} + [A]{Y}) = {U} + i{V}

and equating real and imaginary parts gives the block system

[ A  -B ] {X}   {U}
[ B   A ] {Y} = {V}

• n complex equations are converted to 2n real equations

Example:

[ 3+2i  4+0i ] [ x1 + y1·i ]   [ 3+1i ]
[ 0-1i  1+0i ] [ x2 + y2·i ] = [ 3+0i ]

Here [A] = [3 4; 0 1], [B] = [2 0; -1 0], {U} = {3, 3}, {V} = {1, 0}.

Real part:        [ 3 4 ] {x1}   [  2 0 ] {y1}   {3}
                  [ 0 1 ] {x2} - [ -1 0 ] {y2} = {3}

Imaginary part:   [  2 0 ] {x1}   [ 3 4 ] {y1}   {1}
                  [ -1 0 ] {x2} + [ 0 1 ] {y2} = {0}

Combined 4x4 real system:

[  3  4  -2  0 ] [ x1 ]   [ 3 ]
[  0  1   1  0 ] [ x2 ] = [ 3 ]
[  2  0   3  4 ] [ y1 ]   [ 1 ]
[ -1  0   0  1 ] [ y2 ]   [ 0 ]
Nonlinear System of Equations
• Generalised, multi-equation (multi-dimensional) version of the Newton-Raphson method

Taylor series expansion of the k-th equation, truncated after the first
derivative, with the function set to zero at the new estimate

Rearranging, with the iteration-i terms on one side and the
iteration-(i+1) terms on the other side of the equal sign

Arranging all the equations in matrix form
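Written out in the usual notation, with [Z] for the Jacobian as in the bullets
that follow:

  f_k(x^(i+1)) ≈ f_k(x^(i)) + Σ_j (∂f_k/∂x_j)·(x_j^(i+1) - x_j^(i)) = 0,   k = 1, …, n

  [Z] {x^(i+1) - x^(i)} = -{f(x^(i))}    =>    {x^(i+1)} = {x^(i)} - [Z]⁻¹ {f(x^(i))}

where [Z] is the Jacobian matrix, z_kj = ∂f_k/∂x_j, evaluated at the current
estimate x^(i).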

• A finite-difference approximation may be used for
the partial derivatives in [Z]
• Excellent initial guesses are required to ensure
convergence.
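A short sketch of the procedure for two equations, using a finite-difference
Jacobian as suggested above; the test functions, tolerances and starting guess
are illustrative assumptions, not taken from the slides.

import numpy as np

def newton_raphson_system(f, x0, tol=1e-8, max_iter=50, h=1e-6):
    """Multi-dimensional Newton-Raphson with a finite-difference Jacobian [Z]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        n = len(x)
        Z = np.zeros((n, n))
        for j in range(n):                 # finite-difference Jacobian, column by column
            xp = x.copy()
            xp[j] += h
            Z[:, j] = (f(xp) - fx) / h
        dx = np.linalg.solve(Z, -fx)       # [Z] {dx} = -{f}
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

# Illustrative pair of nonlinear equations:
#   x1^2 + x1*x2 = 10,   x2 + 3*x1*x2^2 = 57
f = lambda x: np.array([x[0]**2 + x[0]*x[1] - 10.0,
                        x[1] + 3.0*x[0]*x[1]**2 - 57.0])
print(newton_raphson_system(f, [1.5, 3.5]))   # converges to about (2, 3)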
Special matrices
• Banded Matrix
– Tridiagonal Matrix
(ODE discretization)

• Sparse Matrix
(PDE discretization)
Tri-diagonal Matrix
• Thomas Algorithm (LU Decomposition)
• Forward substitution

• Backward substitution
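A compact sketch of the algorithm, with the three diagonals stored as vectors
(the names a, b, c, d are assumed here: sub-diagonal, main diagonal,
super-diagonal and right-hand side); it follows the steps named above.

import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system. a: sub-diagonal (a[0] unused),
    b: main diagonal, c: super-diagonal (c[-1] unused), d: right-hand side."""
    a = np.asarray(a, dtype=float)
    c = np.asarray(c, dtype=float)
    b = np.array(b, dtype=float)
    d = np.array(d, dtype=float)
    n = len(d)
    # Decomposition and forward substitution
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Backward substitution
    x = np.zeros(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x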
Iterative procedure

• The initial guess may be taken as zero or one


• Stopping criteria
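A commonly used criterion is to stop when the approximate relative error of
every unknown falls below a preset tolerance εs:

  |x_i^new - x_i^old| / |x_i^new| × 100% < εs    for all i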
Jacobi Method

• Parallelization is possible (each new value uses only results
from the previous iteration)

Iteration:   0        1          2          3          4          5          6
x1 =         0    2.616667   3.000762   3.000806   3.000022   2.999999   3
x2 =         0   -2.75714   -2.48852   -2.49974   -2.5       -2.5       -2.5
x3 =         0    7.14       7.006357   7.000207   6.999981   6.999999   7


Gauss-Seidel Method

• Parallelization is difficult (each new value immediately uses
the latest available values)

Iteration:   0        1          2          3         4
x1 =         0    2.616667   2.990557   3.000032   3
x2 =         0   -2.79452   -2.49962   -2.49999   -2.5
x3 =         0    7.00561    7.000291   6.999999   7
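A minimal sketch of both iterations; the 3×3 coefficient values below are an
assumption (the same example system whose iterates are tabulated above, which
they reproduce). Setting use_latest=True gives Gauss-Seidel, False gives Jacobi.

import numpy as np

def iterate(A, b, n_iter=10, use_latest=True):
    """Jacobi (use_latest=False) or Gauss-Seidel (use_latest=True) iteration."""
    n = len(b)
    x = np.zeros(n)                      # initial guess: all zeros
    for _ in range(n_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel uses the freshest values in x; Jacobi uses only x_old
            src = x if use_latest else x_old
            s = sum(A[i, j] * src[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(iterate(A, b, use_latest=False))   # Jacobi       -> about [3, -2.5, 7]
print(iterate(A, b, use_latest=True))    # Gauss-Seidel -> about [3, -2.5, 7]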
Convergence Condition

• Sufficient condition: the coefficient matrix is diagonally
dominant, i.e. |a_ii| > Σ_(j≠i) |a_ij| in every row
Successive over-relaxation Method
From Wikipedia
• In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss-Seidel method for solving a linear system of
equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process.
• It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation
methods had been used before the work of Young and Frankel, for example the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, these
methods were designed for computation by human calculators, requiring some expertise to ensure convergence to the solution, which made them inapplicable for programming on
digital computers. These aspects are discussed in the thesis of David M. Young Jr.[1]
• Formulation
• Given a square system of n linear equations A x = b with unknown x, where A is the n×n matrix of coefficients a_ij, x = (x1, …, xn)ᵀ and b = (b1, …, bn)ᵀ.
• A can be decomposed into a diagonal component D and strictly lower and upper triangular components L and U: A = D + L + U, where D holds the diagonal entries a_ii, L the entries below the diagonal and U the entries above it.
• The system of linear equations may be rewritten as

  (D + ωL) x = ωb - [ωU + (ω - 1)D] x

  for a constant ω > 1, called the relaxation factor.
• The method of successive over-relaxation is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as

  x^(k+1) = (D + ωL)⁻¹ (ωb - [ωU + (ω - 1)D] x^(k)) = L_ω x^(k) + c,

  where x^(k) is the k-th approximation or iteration of x and x^(k+1) is the next, (k+1)-th, iteration. However, by taking advantage of the triangular form of (D + ωL), the elements of x^(k+1) can be computed sequentially using forward substitution:

  x_i^(k+1) = (1 - ω) x_i^(k) + (ω / a_ii) ( b_i - Σ_(j<i) a_ij x_j^(k+1) - Σ_(j>i) a_ij x_j^(k) ),   i = 1, 2, …, n.
• Convergence
• The spectral radius ρ(C_ω) of the SOR iteration matrix C_ω depends on the spectral radius of the Jacobi iteration matrix, μ := ρ(C_Jac).
• The choice of relaxation factor ω is not necessarily easy, and depends upon the properties of the coefficient matrix. In 1947, Ostrowski proved that if A is symmetric and positive-definite, then ρ(L_ω) < 1 for 0 < ω < 2. Thus convergence of the iteration process follows, but we are generally
interested in faster convergence rather than just convergence.
• Convergence Rate
• The convergence rate for the SOR method can be derived analytically. One needs to assume the following:
  – the relaxation parameter is appropriate: ω ∈ (0, 2)
  – Jacobi's iteration matrix C_Jac := I - D⁻¹A has only real eigenvalues
  – Jacobi's method is convergent: μ := ρ(C_Jac) < 1
  – a unique solution exists: det A ≠ 0.
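A short sketch of the element-wise update above (illustrative; ω = 1 recovers
Gauss-Seidel, and the iteration count is an assumed parameter):

import numpy as np

def sor(A, b, omega=1.1, n_iter=20):
    """Successive over-relaxation using the element-wise forward-substitution form."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            # x[:i] already holds (k+1)-values, x[i+1:] still holds k-values
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1.0 - omega) * x[i] + (omega / A[i, i]) * (b[i] - sigma)
    return x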
Reducing flops
• Pre-divide the coefficients of each row by its
diagonal element, so the division by a_ii need not be
repeated in every iteration
Relaxation
• λ = 0 to 1: under-relaxation (damping)
• λ = 1 to 2: over-relaxation (pushes the estimate further in the right direction)

  x_i^new = λ·x_i^new + (1 - λ)·x_i^old

  (on the right-hand side, x_i^new is the value just obtained by the Jacobi or
  Gauss-Seidel step and x_i^old is the value from the previous iteration; the
  left-hand side is the value after applying the relaxation factor)
