AMC_Unit 2 Linear Algebra
Computation
Unit 3
Linear Algebra
A Set of Linear Algebraic Equations
• Solution of the example system: x1 = 3.0, x2 = -2.5
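In practice such a system is handed to a library solver. Below is a minimal sketch with NumPy; since the slide's original equations are not reproduced here, the coefficients are an assumed 2×2 system chosen only so that its solution matches the values above.

```python
import numpy as np

# Assumed illustrative system (not from the slide):
#   x1 + x2 = 0.5
#   x1 - x2 = 5.5
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([0.5, 5.5])

x = np.linalg.solve(A, b)   # LAPACK-based Gaussian elimination
print(x)                    # [ 3.  -2.5]
```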
Operation Counting
• Forward elimination: ~2n³/3 flops, which dominates for large n
• Back substitution: ~n² flops
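To make the counts concrete, here is a sketch of naive Gauss elimination with a rough flop counter (counting one increment per multiply/subtract is an illustrative convention); the test system is the standard textbook 3×3 example.

```python
import numpy as np

def gauss_solve(A, b):
    """Naive Gauss elimination (no pivoting) with rough flop counts."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    flops_elim = flops_back = 0

    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
            flops_elim += 2 * (n - k) + 3   # rough count for this row update

    # Back substitution: solve from the last equation upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        flops_back += 2 * (n - i - 1) + 2

    return x, flops_elim, flops_back

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x, fe, fb = gauss_solve(A, b)
print(x, fe, fb)   # x ~ [3, -2.5, 7]; elimination flops >> back-substitution flops
```

Summing the elimination counter over all steps reproduces the ~2n³/3 growth, which is why elimination, not back substitution, dominates the cost.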
Pitfall-I
• Pivot element is zero, so the elimination factor is undefined
Pitfall-II
• Division by a very small pivot element
• Example (first equation of the system): 0.0003 x1 + 3.0000 x2 = 2.0001, with exact solution x1 = 1/3, x2 = 2/3
• Factor = 1/0.0003
• Back-substituting with x2 rounded to 1, 2, and 3 significant figures:
– x1 = (2.0001 − 3×0.7)/0.0003 = −333
– x1 = (2.0001 − 3×0.67)/0.0003 = −33
– x1 = (2.0001 − 3×0.667)/0.0003 = −3
• All far from the true x1 ≈ 0.333: the tiny pivot magnifies the rounding error in x2 (demonstrated in the sketch below)
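A short sketch of this pitfall: x2 is rounded to fewer significant figures before back-substituting for x1. The first equation matches the slide; the companion equation x1 + x2 = 1 is assumed from the standard textbook version of this example.

```python
# Pitfall demo: back substitution through a tiny pivot (0.0003).
# System: 0.0003*x1 + 3.0000*x2 = 2.0001
#         1.0000*x1 + 1.0000*x2 = 1.0000   (assumed companion equation)
# Exact solution: x1 = 1/3, x2 = 2/3.

for x2 in (0.7, 0.67, 0.667, 0.6667, 0.66667):
    x1 = (2.0001 - 3.0 * x2) / 0.0003
    print(f"x2 = {x2:<8} -> x1 = {x1:.4f}   (exact x1 = 0.3333)")
```

Each extra significant figure in x2 moves x1 an order of magnitude closer to the true value, exactly the behavior tabulated above.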
Pitfall-III
• Round-off error
– Becomes important when dealing with 100 or more equations, because errors propagate through every elimination step
• Remedy (illustrated in the sketch below)
– Increase the number of significant figures (extended precision)
– This increases the computational effort
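As a sketch of the remedy and its trade-off, the same system can be solved in single and double precision. The Hilbert matrix used here is an assumption, chosen only because it is a classic round-off-prone test case.

```python
import numpy as np

n = 6
# Hilbert matrix: a classic example where round-off propagates badly.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true

for dtype in (np.float32, np.float64):
    x = np.linalg.solve(H.astype(dtype), b.astype(dtype))
    print(dtype.__name__, "max error:", np.max(np.abs(x - x_true)))
# float64 (more significant figures) gives a far smaller error,
# at the cost of more memory and arithmetic effort per operation.
```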
Remedy
• Pivoting
– Switch the row with the largest available element into the pivot position
• Complete pivoting
– Both the rows and the columns are searched for the largest element, and both are switched
• Partial pivoting
– Only the column below the pivot element is searched for the largest element, and the corresponding row is switched
• Complete pivoting is rarely used because switching columns changes the order of the x's and adds complexity to the program
• In programming, rows are often not physically switched
• Instead, the pivot row numbers are tracked in an index vector
• The index vector defines the order of forward elimination and back substitution without moving any data
• Moreover, scaling is required to identify whether pivoting is necessary, since a large raw coefficient may only reflect the units used (see the sketch after this list)
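A minimal sketch of scaled partial pivoting as described above: rows are never moved; an index vector records the pivot order, and each candidate pivot is compared only after division by its row's scale factor (the largest coefficient in that row).

```python
import numpy as np

def gauss_scaled_pivot(A, b):
    """Gauss elimination with scaled partial pivoting via an index vector."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    idx = list(range(n))                 # pivot-order bookkeeping; rows never move
    s = np.max(np.abs(A), axis=1)        # scale: largest |coefficient| per row

    for k in range(n - 1):
        # Pick the row whose SCALED pivot candidate is largest.
        p = max(range(k, n), key=lambda r: abs(A[idx[r], k]) / s[idx[r]])
        idx[k], idx[p] = idx[p], idx[k]  # swap index entries, not rows
        for r in range(k + 1, n):
            i = idx[r]
            factor = A[i, k] / A[idx[k], k]
            A[i, k:] -= factor * A[idx[k], k:]
            b[i] -= factor * b[idx[k]]

    # Back substitution, visiting rows in pivot order (columns stay in
    # natural order, so unknown r uses column index r).
    x = np.zeros(n)
    for r in range(n - 1, -1, -1):
        i = idx[r]
        x[r] = (b[i] - A[i, r+1:] @ x[r+1:]) / A[i, r]
    return x

A = np.array([[2.0, 100000.0], [1.0, 1.0]])
b = np.array([100000.0, 2.0])
print(gauss_scaled_pivot(A, b))   # ~[1, 1]; without scaling, row 1 looks like the pivot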
Example
• Without scaling & pivoting vs. with scaling & pivoting
Pitfalls
• Ill-conditioned equations
– The slopes of the equations are nearly the same
– Setting the two slopes equal, −a11/a12 = −a21/a22, gives a11a22 = a21a12, i.e. a11a22 − a21a12 = 0
– The determinant is exactly zero for a singular system and close to zero for an ill-conditioned one
• The sensitivity of the solution is bounded by the condition number:
‖Δx‖/‖x‖ ≤ Cond(A) · ‖ΔA‖/‖A‖
where Cond(A) = ‖A‖·‖A⁻¹‖ ≥ 1
Matrix norm
• A norm is a real-valued function that provides a measure of the size of multi-component mathematical entities such as vectors and matrices
• Frobenius norm: ‖A‖F = (Σi Σj a_ij²)^(1/2)
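A short sketch using NumPy's built-in norm and condition-number routines on a nearly singular matrix (the matrix and the perturbation are illustrative assumptions).

```python
import numpy as np

A = np.array([[1.000, 2.0],
              [1.001, 2.0]])   # two nearly parallel lines -> ill-conditioned

fro = np.linalg.norm(A, 'fro')     # sqrt of the sum of squared entries
cond = np.linalg.cond(A, 'fro')    # ||A|| * ||A^-1|| in the same norm
print(f"Frobenius norm = {fro:.4f}, Cond(A) = {cond:.1f}")

# A tiny relative change in one coefficient...
dA = np.zeros_like(A); dA[1, 0] = 0.001
b = np.array([3.0, 3.001])
x  = np.linalg.solve(A, b)
x2 = np.linalg.solve(A + dA, b)
print("solution change:", x, "->", x2)   # large swing, as Cond(A) predicts
```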
LU Decomposition
• [A] = [L][U]: factor once, then solve for any number of right-hand sides
• Reduces the number of flops when many right-hand sides share the same [A]
• Saves memory ([L] and [U] can overwrite [A])
• Forward substitution: solve [L]{d} = {b}
• Back substitution: solve [U]{x} = {d}
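A sketch of Doolittle-style LU decomposition (unit diagonal on [L], no pivoting, so nonzero pivots are assumed) followed by the two triangular solves. Factoring once and reusing [L][U] is what saves flops across multiple right-hand sides.

```python
import numpy as np

def lu_decompose(A):
    """Doolittle factorization A = L U (unit diagonal on L), no pivoting."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # store the elimination factor
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = np.zeros(n)
    for i in range(n):                       # forward substitution: L d = b
        d[i] = b[i] - L[i, :i] @ d[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution: U x = d
        x[i] = (d[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
L, U = lu_decompose(A)                  # factor once...
for b in ([7.85, -19.3, 71.4], [1.0, 0.0, 0.0]):
    print(lu_solve(L, U, np.array(b)))  # ...then reuse for many right-hand sides
```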
Matrix Inverse
• [A][A]⁻¹ = [I]
• Column i of [A]⁻¹ is found by solving [A]{x} = {e_i}, where {e_i} is the ith column of [I]
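Following the relation above, each column of the inverse solves [A]{x} = {e_i}. A sketch with NumPy (in practice one would factor [A] once, as in the LU sketch, rather than re-solving from scratch):

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
n = A.shape[0]

# Column i of the inverse is the solution of A x = e_i.
Ainv = np.column_stack([np.linalg.solve(A, np.eye(n)[:, i]) for i in range(n)])

print(np.allclose(A @ Ainv, np.eye(n)))   # True: [A][A]^-1 = [I]
```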
Complex Systems
• A complex system [C]{z} = {w}, where [C] = [A] + i[B], {z} = {x} + i{y}, {w} = {u} + i{v}, can be solved in real arithmetic via the block system

$$\begin{bmatrix}A&-B\\B&A\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}u\\v\end{bmatrix}$$
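A sketch verifying the block form: the same complex system is solved directly in complex arithmetic and via the equivalent real 2n×2n block system (random test data is an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)

# Direct complex solve: (A + iB) z = u + iv
z = np.linalg.solve(A + 1j * B, u + 1j * v)

# Equivalent real block system: [[A, -B], [B, A]] [x; y] = [u; v]
M = np.block([[A, -B], [B, A]])
xy = np.linalg.solve(M, np.concatenate([u, v]))
x, y = xy[:n], xy[n:]

print(np.allclose(z, x + 1j * y))   # True: both routes agree
```

The block form follows from expanding (A + iB)(x + iy) = u + iv and matching real parts (Ax − By = u) and imaginary parts (Bx + Ay = v).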
• Sparse matrix
– Arises from PDE discretization
Tri-diagonal Matrix
• Thomas algorithm (LU decomposition specialized to tridiagonal systems)
• Forward substitution
• Back substitution
• Costs only O(n) flops, versus O(n³) for full elimination (see the sketch below)
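A sketch of the Thomas algorithm with the three diagonals stored as 1-D arrays (the array names are illustrative); the forward sweep combines the decomposition and forward substitution in a single pass.

```python
import numpy as np

def thomas(a, d, c, b):
    """Solve a tridiagonal system in O(n).
    a: sub-diagonal (a[0] unused), d: main diagonal,
    c: super-diagonal (c[-1] unused), b: right-hand side."""
    n = len(d)
    d, b = d.astype(float).copy(), b.astype(float).copy()
    # Forward sweep: decomposition + forward substitution combined.
    for i in range(1, n):
        m = a[i] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    # Back substitution.
    x = np.zeros(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# 1-D Poisson-style system, typical of PDE discretization.
n = 5
a = np.full(n, -1.0); d = np.full(n, 2.0); c = np.full(n, -1.0)
b = np.ones(n)
x = thomas(a, d, c, b)

T = np.diag(d) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(T @ x, b))   # True
```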
Iterative Procedures
• Jacobi: parallelization possible, since every update uses only the previous iterate
• Gauss-Seidel: parallelization difficult, since each update uses the newest available values
• Convergence is guaranteed when the matrix is diagonally dominant (see the sketch below)
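A sketch contrasting the two updates on the textbook 3×3 system: the Jacobi step reads only the old iterate, so all components could be computed in parallel, while the Gauss-Seidel step consumes each new value immediately and is inherently sequential.

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])   # diagonally dominant
b = np.array([7.85, -19.3, 71.4])

def jacobi_step(x):
    # All components from the OLD iterate -> embarrassingly parallel.
    return (b - (A - np.diag(np.diag(A))) @ x) / np.diag(A)

def gauss_seidel_step(x):
    # Newest values used immediately -> inherently sequential.
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

for step, name in ((jacobi_step, "Jacobi"), (gauss_seidel_step, "Gauss-Seidel")):
    x = np.zeros(3)
    for k in range(15):
        x = step(x)
    print(name, x)   # both approach [3, -2.5, 7]; Gauss-Seidel gets there sooner
```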
Successive over-relaxation Method
From Wikipedia:
• In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process.
• It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel, for example the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, those methods were designed for computation by human calculators and required expertise to ensure convergence, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young Jr.[1]
Formulation
• Given a square system of n linear equations A x = b with unknown x, where

$$A=\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{bmatrix},\qquad \mathbf{x}=\begin{bmatrix}x_{1}\\x_{2}\\\vdots\\x_{n}\end{bmatrix},\qquad \mathbf{b}=\begin{bmatrix}b_{1}\\b_{2}\\\vdots\\b_{n}\end{bmatrix}.$$

• A can be decomposed into a diagonal component D and strictly lower and upper triangular components L and U, so that A = D + L + U, where

$$D=\begin{bmatrix}a_{11}&0&\cdots&0\\0&a_{22}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a_{nn}\end{bmatrix},\quad L=\begin{bmatrix}0&0&\cdots&0\\a_{21}&0&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\a_{n1}&a_{n2}&\cdots&0\end{bmatrix},\quad U=\begin{bmatrix}0&a_{12}&\cdots&a_{1n}\\0&0&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&0\end{bmatrix}.$$

• The system of linear equations may be rewritten as (D + ωL)x = ωb − [ωU + (ω − 1)D]x for a constant ω > 1, called the relaxation factor.
• The method of successive over-relaxation is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as

$$\mathbf{x}^{(k+1)}=(D+\omega L)^{-1}\big(\omega\mathbf{b}-[\omega U+(\omega-1)D]\,\mathbf{x}^{(k)}\big)=L_{\omega}\mathbf{x}^{(k)}+\mathbf{c},$$

where x^(k) is the kth iterate and x^(k+1) the next one. However, by taking advantage of the triangular form of (D + ωL), the elements of x^(k+1) can be computed sequentially using forward substitution:

$$x_{i}^{(k+1)}=(1-\omega)\,x_{i}^{(k)}+\frac{\omega}{a_{ii}}\left(b_{i}-\sum_{j<i}a_{ij}x_{j}^{(k+1)}-\sum_{j>i}a_{ij}x_{j}^{(k)}\right),\quad i=1,2,\ldots,n.$$
Convergence
• [Figure: spectral radius ρ(C_ω) of the SOR iteration matrix versus the relaxation factor, for several values of the Jacobi spectral radius μ := ρ(C_Jac).]
• The choice of relaxation factor ω is not necessarily easy and depends upon the properties of the coefficient matrix. In 1947, Ostrowski proved that if A is symmetric and positive-definite, then ρ(L_ω) < 1 for 0 < ω < 2. Thus convergence of the iteration process follows, but we are generally interested in faster convergence rather than just convergence.
Convergence Rate
• The convergence rate for the SOR method can be derived analytically. One needs to assume the following:
– the relaxation parameter is appropriate: ω ∈ (0, 2)
– the Jacobi iteration matrix C_Jac := I − D⁻¹A has only real eigenvalues
– Jacobi's method is convergent: μ := ρ(C_Jac) < 1
– a unique solution exists: det A ≠ 0
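A sketch of the element-wise SOR update from the formulation above; ω = 1 recovers Gauss–Seidel, and the symmetric positive-definite test matrix is an illustrative assumption.

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=500):
    """Successive over-relaxation via the forward-substitution form."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # New values for j < i, old values for j > i.
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # symmetric positive-definite
b = np.array([2.0, 4.0, 10.0])

for omega in (1.0, 1.1, 1.3):
    x, iters = sor(A, b, omega)
    print(f"omega = {omega}: {iters} iterations, x = {x}")
```

Comparing iteration counts across ω values shows the point of over-relaxation: a well-chosen ω > 1 converges in fewer sweeps than plain Gauss-Seidel.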
Reducing Flops
• Pre-divide each row's coefficients (and right-hand side) by its diagonal element once, so the division by a_ii is not repeated in every iteration
Relaxation
• After each update, blend the new and old values:
x_i = λ·x_i^new + (1 − λ)·x_i^old
• λ between 0 and 1: under-relaxation (damping, useful for oscillating or non-convergent iterations)
• λ between 1 and 2: over-relaxation (extra weight when the iteration is already moving in the right direction)
• Here x_i^new is the value obtained by the Jacobi or Gauss-Seidel method, and the blended x_i is the value after applying the relaxation factor