CE3330 Dr. Tarun Naskar

The document discusses various methods for solving linear and nonlinear equations that arise in structural analysis problems: 1. It provides an example of analyzing a rigid frame structure and obtaining its stiffness matrix and displacement vector through an exact analytical solution. 2. It then summarizes different methods for solving systems of linear equations, including Cramer's rule, Gaussian elimination, Gauss-Jordan elimination, LU decomposition, and iterative methods. 3. For nonlinear equations, it briefly outlines the bisection method as one example of a bracketing root-finding technique.


Chapter 2:

Different methods to solve linear and nonlinear equations:

Analysis of the rigid frame by direct stiffness method

E = 200 GPa ; I = 1.33 × 10^-4 m^4

where E = Young's modulus
I = second moment of area
EI = flexural rigidity
EA = axial rigidity; A = 0.01 m^2

Example: Analyze the rigid frame.

[Figure: rigid frame carrying a 48 kN load at midspan of the beam (2 m + 2 m) and a 10 kN lateral load; column height 4 m]

• Each member will have two nodes.
• Each node will have two translational and one rotational degree of freedom.
• Total: 6 degrees of freedom (DOF) per member.

{P} = [K] {u}

Because of the constrained DOFs, we end up with only 6 unknown variables instead of 18 (6 × 3).
Exact solution by analytical method:

{P} = { 10, −24, −24, 0, −24, 24 }ᵀ

      [ 500.5    0      1.0   −500     0      0    ]
      [   0    500.5    1.0     0    −0.5    1.0   ]
[K] = [  1.0    1.0     5.33    0    −1.0    1.33  ] × 10³
      [ −500     0      0     500.5    0     1.0   ]
      [   0     0.5    −1.0     0    500.5  −1.0   ]
      [   0     1.0     1.33   1.0   −1.0    5.33  ]

{u} = { 1.43×10⁻², −3.84×10⁻⁵, −8.14×10⁻³, 1.43×10⁻², −5.65×10⁻⁵, 3.85×10⁻³ }ᵀ
Solving Linear Equations:

References :

1. E. Kreyszig: Advanced Engineering Mathematics (10th Edition); Wiley.


2. D.G. Zill: Advanced Engineering Mathematics (6th Edition); Jones & Bartlett Learning.

1. Cramer's rule (n! operations):

Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns (n × n):

a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn

xk = det(Ak) / det(A),   det A ≠ 0

where Ak is A with its k-th column replaced by the vector b:

     | a11  a12  ⋯  b1  ⋯  a1n |
Ak = | a21  a22  ⋯  b2  ⋯  a2n |
     |  ⋮    ⋮      ⋮       ⋮  |
     | an1  an2  ⋯  bn  ⋯  ann |
               (k-th column)

Limitation: Computationally expensive for systems of more than two or three equations
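To make the formula concrete, here is a minimal sketch of Cramer's rule in plain Python (the recursive Laplace-expansion determinant is itself O(n!), which is exactly why the method is impractical beyond tiny systems). The 3×3 test system is taken from the Gauss elimination example later in this chapter.

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for tiny n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = []
    for k in range(len(b)):
        # A_k: A with its k-th column replaced by b
        Ak = [row[:k] + [b[i]] + row[k + 1:] for i, row in enumerate(A)]
        x.append(det(Ak) / d)
    return x
```

For the system 4x1+3x2−x3 = −5.7, 5x1+x2+3x3 = 2.8, x1−x2+2x3 = 3.8 this returns (−0.9, 0.1, 2.4), matching the elimination result below.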

Gauss Elimination method:

Gaussian elimination (also known as row reduction) is an algorithm for solving systems of linear equations. It is a sequence of operations performed on the corresponding matrix of coefficients until we obtain a row-equivalent augmented matrix that is in row-echelon form.

4x1 + 3x2 − x3 = −5.7    E1
5x1 + x2 + 3x3 = 2.8     E2
x1 − x2 + 2x3 = 3.8      E3

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as far as possible. There are three types of elementary row operations:

▪ Swapping two rows:
Ei ↔ Ej
▪ Multiplying a row by a nonzero number:
c·Ei → Ei
▪ Adding a multiple of one row to another row:
Ei + c·Ej → Ei

Gauss–Jordan Elimination:

After obtaining row-echelon form, the row operations are continued until we obtain an augmented matrix that is in reduced row-echelon form:

i. The first nonzero entry in each nonzero row is 1.
ii. In consecutive nonzero rows, the leading 1 in the lower row appears to the right of the leading 1 in the higher row.
iii. Rows consisting entirely of zeros are at the bottom of the matrix.
iv. A column containing a leading 1 has zeros everywhere else.
Example:

    [ 4    3   −1 │ −5.7 ]
A = [ 5    1    3 │  2.8 ]
    [ 1   −1    2 │  3.8 ]

1) E2 − (5/4)·E1 → E2
2) E3 − (a31/a11)·E1 → E3
3) E3 − (a32/a22)·E2 → E3

[ 4     3        −1      │ −5.7    ]
[ 0   −2.75     4.25     │  9.925  ]
[ 0     0     −0.4545    │ −1.0909 ]

Question: Is this in row-echelon form or reduced row-echelon form?

4x1 + 3x2 − x3 = −5.7
−2.75x2 + 4.25x3 = 9.925
−0.4545x3 = −1.0909

Back substitution gives:

x3 = 2.4
x2 = 0.1
x1 = −0.9
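The hand calculation above can be sketched directly as code. This is a minimal naive implementation (no pivoting, so it assumes nonzero pivots, as in this example); the forward-elimination factors are exactly the 5/4, a31/a11, a32/a22 used above.

```python
def gauss_solve(A, b):
    """Naive Gaussian elimination with back substitution (no pivoting)."""
    n = len(b)
    # work on copies so the caller's data is untouched
    A = [row[:] for row in A]
    b = b[:]
    # forward elimination: zero out entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]      # e.g. a21/a11 = 5/4
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution, from the last row upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Running it on the example system reproduces x = (−0.9, 0.1, 2.4).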

Operations Count:

For a set of n linear equations:

Number of divisions = Σ n = n(n+1)/2

Multiplications for forward elimination = Σ n(n−1) = n(n+1)(n−1)/3

Multiplications in back substitution = Σ (n−1) = n(n−1)/2

Total = n(n² + 3n − 1)/3 ≈ n³/3 if n is large

Gauss–Jordan elimination involves roughly 50% more arithmetic operations: ≈ (3/2)·(n³/3) = n³/2

LU decomposition: n³/3

Cholesky factorization: n³/6

LU decomposition :
L U decomposition of a matrix is the factorization of a given square matrix (A) into two
triangular matrices, one upper triangular matrix (U) and one lower triangular matrix (L), such
that the product of these two matrices gives the original matrix

    [ l11   0    0   ⋯   0  ]        [ u11  u12  u13  ⋯  u1n ]
    [ l21  l22   0   ⋯   0  ]        [  0   u22  u23  ⋯  u2n ]
L = [ l31  l32  l33  ⋯   0  ]    U = [  0    0   u33  ⋯  u3n ]
    [  ⋮    ⋮    ⋮   ⋱   ⋮  ]        [  ⋮    ⋮    ⋮   ⋱   ⋮  ]
    [ ln1  ln2  ln3  ⋯  lnn ]        [  0    0    0   ⋯  unn ]

A is written as the product A = LU. Then

Ax = b ⇒ L(Ux) = b

Let Ux = y. First solve Ly = b for y (forward substitution), then solve Ux = y for x (back substitution).
Basically, the LU decomposition method comes in handy whenever the problem to be solved can be modelled in matrix form. Conversion to matrix form and solving with triangular matrices makes the calculations easy. It has an added advantage if L is a lower triangular matrix with all diagonal elements equal to 1.

How to find an LU factorization?

[ a11  a12  a13 ]   [ 1    0    0 ] [ u11  u12  u13 ]
[ a21  a22  a23 ] = [ l21  1    0 ] [  0   u22  u23 ]
[ a31  a32  a33 ]   [ l31  l32  1 ] [  0    0   u33 ]

Multiplying out the right-hand side:

[ a11  a12  a13 ]   [ u11       u12                 u13                     ]
[ a21  a22  a23 ] = [ l21·u11   l21·u12 + u22       l21·u13 + u23           ]
[ a31  a32  a33 ]   [ l31·u11   l31·u12 + l32·u22   l31·u13 + l32·u23 + u33 ]

Equating entries:

u11 = a11        u12 = a12                  u13 = a13
a21 = l21·u11    a22 = l21·u12 + u22        a23 = l21·u13 + u23
a31 = l31·u11    a32 = l31·u12 + l32·u22    a33 = l31·u13 + l32·u23 + u33

In general (Doolittle's method, with unit diagonal in L):

for i = 1 and all j:
    u1j = a1j
end

for j = 1 and all i:
    li1 = ai1 / u11
end

for i = 2 … n−1:
    for j = i:
        uii = aii − Σ_{k=1}^{i−1} lik·uki
    end
    for j > i:
        uij = aij − Σ_{k=1}^{i−1} lik·ukj
        lji = (1/uii) · [ aji − Σ_{k=1}^{i−1} ljk·uki ]
    end
end

for i = j = n:
    unn = ann − Σ_{k=1}^{n−1} lnk·ukn
end
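The recurrences above translate almost line for line into code. This is a minimal Doolittle sketch (unit diagonal on L, no pivoting, so it assumes the leading minors of A are nonsingular), together with the forward/back substitution solves Ly = b and Ux = y.

```python
def lu_doolittle(A):
    """Doolittle LU factorization: A = L·U with L having a unit diagonal.
    Assumes no pivoting is needed."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        # row i of U:  u_ij = a_ij - sum_{k<i} l_ik u_kj
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # column i of L:  l_ji = (a_ji - sum_{k<i} l_jk u_ki) / u_ii
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve Ax = b given A = LU: forward solve Ly = b, then back solve Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

On the Gauss elimination example matrix, L·U reproduces A and lu_solve returns the same solution as before.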
Iterative Methods for Solving System of Linear Equations:

References :

1. S.C. Chapra & R.P. Canale: Numerical Methods for Engineers (5th Edition); Tata McGraw-Hill.

[Figure: the same rigid frame as before (48 kN and 10 kN loads, 2 m + 2 m beam, 4 m columns)]

For a frame with n members, the number of DOFs is n×3 + 3 = 3(n+1), so the global stiffness matrix is of size 3(n+1) × 3(n+1).

So, in most cases, the global matrix will be highly sparse. In this kind of situation, iterative methods have an advantage.

Bracketing Methods:

There are several methods in this category; the most commonly used are:

1) The Bisection Method:

The bisection method is a root-finding method that applies to any continuous function f(x) for which one knows two values with opposite signs.

Step 1

Choose a lower guess xl and an upper guess xu such that

f(xu)·f(xl) < 0

Step 2

An estimate of the root xr is determined by

xr = (xu + xl) / 2

Step 3

If f(xl)·f(xr) < 0, then xu = xr
or, if f(xl)·f(xr) > 0, then xl = xr

Special case: if f(xl)·f(xr) = 0, then xr is the root; terminate the computation.

Relative error = | (xr_new − xr_old) / xr_new | × 100%

It is a very simple and robust method, but it is also relatively slow.
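The three steps map directly onto a short loop. A minimal sketch (the tolerance is applied to the percent relative error defined above, and the tol and max_iter defaults are illustrative choices, not from the notes):

```python
def bisection(f, xl, xu, tol=1e-6, max_iter=100):
    """Bisection: f must be continuous with f(xl)*f(xu) < 0.
    tol is the stopping tolerance on the percent relative error."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("root is not bracketed: need f(xl)*f(xu) < 0")
    xr_old = xl
    for _ in range(max_iter):
        xr = (xl + xu) / 2.0            # Step 2: midpoint estimate
        test = f(xl) * f(xr)
        if test < 0:
            xu = xr                     # root lies in [xl, xr]
        elif test > 0:
            xl = xr                     # root lies in [xr, xu]
        else:
            return xr                   # hit the root exactly
        if xr != 0 and abs((xr - xr_old) / xr) * 100 < tol:
            break
        xr_old = xr
    return xr
```

For example, bisection(lambda x: x*x - 2, 0, 2) converges to √2 ≈ 1.41421.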


2) False Position Method / Regula Falsi Method (Latin)

The bisection method is a perfectly valid technique for determining the roots of any continuous function f(x), but it is a brute-force approach: the function values f(xl) and f(xu) play no role in locating the next estimate. The false position method instead joins f(xl) and f(xu) by a straight line; its intersection with the x-axis gives the estimate xr. By similar triangles:

f(xl) / (xr − xl) = f(xu) / (xr − xu)

xr = xu − f(xu)·(xl − xu) / (f(xl) − f(xu))

xr = ( xu·f(xl) − xl·f(xu) ) / ( f(xl) − f(xu) )

The method is so named because it replaces the curve with a straight line, which gives a "false position" of the original root xr.
Problem with the false position method:

The relative error εa can fall below TOL (tolerance) even though xr is not yet close to the root, because one end of the bracket can stay fixed while the other creeps in from one side. The estimate is therefore not necessarily the answer: always substitute back and check that f(xr) ≈ 0.

Now, what if it converges slowly? If the same bracket end is retained over consecutive iterations, halve its stored function value:

If f(xl)·f(xr) < 0 consecutively (xl is the stagnant end), set fl = fl/2.
If f(xl)·f(xr) > 0 consecutively (xu is the stagnant end), set fu = fu/2.

(Modified false position method)
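A minimal sketch of the modified false position method, following the stagnant-endpoint halving described in Chapra & Canale (the counters il/iu track how long each end has been retained; tolerance defaults are illustrative):

```python
def false_position(f, xl, xu, tol=1e-6, max_iter=100):
    """Modified false position: if one end of the bracket stagnates over
    consecutive iterations, its stored function value is halved."""
    fl, fu = f(xl), f(xu)
    if fl * fu >= 0:
        raise ValueError("root is not bracketed: need f(xl)*f(xu) < 0")
    xr = xu
    il = iu = 0                          # stagnation counters for xl, xu
    for _ in range(max_iter):
        xr_old = xr
        xr = (xu * fl - xl * fu) / (fl - fu)   # straight-line intercept
        fr = f(xr)
        if fl * fr < 0:                  # root in [xl, xr]: xu moves, xl stagnates
            xu, fu = xr, fr
            iu, il = 0, il + 1
            if il >= 2:
                fl /= 2.0
        elif fl * fr > 0:                # root in [xr, xu]: xl moves, xu stagnates
            xl, fl = xr, fr
            il, iu = 0, iu + 1
            if iu >= 2:
                fu /= 2.0
        else:
            return xr                    # exact root
        if xr != 0 and abs((xr - xr_old) / xr) * 100 < tol:
            break
    return xr
```

As the notes warn, the returned xr should still be checked by substituting back: f(xr) ≈ 0.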

Open Methods:

These do not require bracketing the root (finding a valid bracket may itself be difficult).

Part 2

Iterative methods for solving linear systems

Ax = b is rewritten in the fixed-point form x = Tx + c.

a11x1 + a12x2 + a13x3 + ⋯ + a1nxn = b1
⋮
an1x1 + an2x2 + an3x3 + ⋯ + annxn = bn

Solving the i-th equation for xi:

x1 = b1/a11 − (1/a11)[a12x2 + a13x3 + ⋯ + a1nxn]
x2 = b2/a22 − (1/a22)[a21x1 + a23x3 + ⋯ + a2nxn]
⋮
xn = bn/ann − (1/ann)[an1x1 + an2x2 + ⋯ + a(n,n−1)x(n−1)]

In matrix form:

{x} = [T]{x} + {c},   where c = ( b1/a11, b2/a22, …, bn/ann )ᵀ
Jacobi Iterative Method:

x^(k) = T·x^(k−1) + c

In component form:

xi^(k) = (1/aii) · [ bi − Σ_{j=1, j≠i}^{n} aij·xj^(k−1) ]

Gauss–Seidel:

xi^(k) = (1/aii) · [ bi − Σ_{j=1}^{i−1} aij·xj^(k) − Σ_{j=i+1}^{n} aij·xj^(k−1) ]
Both the Jacobi and Gauss-Seidel methods are iterative methods for solving the linear system Ax
= b. In the Jacobi method the updated vector x is used for the computations only after all the
variables (i.e. all components of the vector x) have been updated. On the other hand in the
Gauss-Seidel method, the updated variables are used in the computations as soon as they are
updated.
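The difference between the two update rules is easiest to see side by side in code. A minimal sketch (the diagonally dominant test matrix, tolerance, and iteration cap are illustrative choices):

```python
def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: every component update uses only the previous
    iterate x^(k-1)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel: each component update uses values already updated
    within the current pass k."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))        # new values x^(k)
            s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))  # old values x^(k-1)
            x[i] = (b[i] - s1 - s2) / A[i][i]
        if max(abs(xn - xo) for xn, xo in zip(x, x_old)) < tol:
            return x
    return x
```

Both converge on diagonally dominant systems; Gauss-Seidel typically needs fewer iterations because each pass uses the freshest values.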

Successive Over-Relaxation (SOR) method:

The Gauss–Seidel update

xi^(k) = (1/aii) · [ bi − Σ_{j=1}^{i−1} aij·xj^(k) − Σ_{j=i+1}^{n} aij·xj^(k−1) ]

is modified as:

xi^(k) = (1 − β)·xi^(k−1) + β·xi^(k)

• 0 < β < 1: under-relaxation method
• β > 1: over-relaxation method (typical β values are 1.3 to 1.9)
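In code, SOR is a one-line change to Gauss-Seidel: compute the Gauss-Seidel value, then blend it with the previous iterate using β. A minimal sketch (the β default and test system are illustrative; β = 1 reduces to plain Gauss-Seidel):

```python
def sor(A, b, beta=1.2, tol=1e-10, max_iter=500):
    """Successive over-relaxation: blend the previous value with the
    Gauss-Seidel update, weighted by the relaxation factor beta."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s1 = sum(A[i][j] * x[j] for j in range(i))
            s2 = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x_gs = (b[i] - s1 - s2) / A[i][i]          # Gauss-Seidel value
            x[i] = (1 - beta) * x_old[i] + beta * x_gs  # relaxed update
        if max(abs(xn - xo) for xn, xo in zip(x, x_old)) < tol:
            return x
    return x
```

Choosing β well can substantially cut the iteration count, which is why over-relaxation (β > 1) is the common case in practice.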

Different iteration stopping criteria:

‖x^(N) − x^(N−1)‖ / ‖x^(N)‖ < TOL   (very low tolerance value, e.g. 1×10⁻¹²)

‖x^(N) − x^(N−1)‖ < TOL

max_j | ( xj^(N+1) − xj^(N) ) / xj^(N+1) | < TOL

‖b − A·x^(N)‖ < TOL

‖b − A·x^(N)‖ < ε·‖b‖   (relative residual)

p-norm of an n×1 vector x:

‖x‖p = [ Σ_{i=1}^{n} |xi|^p ]^(1/p)

‖x‖∞ = max_{1≤i≤n} |xi|

L2 norm: ‖x‖2 = [ Σ_{i=1}^{n} |xi|² ]^(1/2)

L1 norm: ‖x‖1 = Σ |xi|
Iterative solutions for nonlinear equations:

Newton-Raphson method:

It is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function.

Expanding f about xi to first order:

f(x(i+1)) ≈ f(xi) + f′(xi)·(x(i+1) − xi)

A root of the function at point (i+1) implies f(x(i+1)) = 0. As this is a first-order approximation,

f′(xi) = ( f(xi) − 0 ) / ( xi − x(i+1) )

x(i+1) = xi − f(xi) / f′(xi)
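The update rule x(i+1) = xi − f(xi)/f′(xi) translates to a very short loop. A minimal sketch (tolerance and iteration cap are illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; iteration cannot proceed")
        x_new = x - f(x) / dfx          # Newton-Raphson update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, newton_raphson(lambda x: x*x - 2, lambda x: 2*x, 1.0) converges quadratically to √2.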

For a system of nonlinear equations

f(x, y) = u
g(x, y) = v

expanding to first order:

u(i+1) = ui + (∂ui/∂x)·(x(i+1) − xi) + (∂ui/∂y)·(y(i+1) − yi)

v(i+1) = vi + (∂vi/∂x)·(x(i+1) − xi) + (∂vi/∂y)·(y(i+1) − yi)

A root of the system at point (i+1) implies f(x(i+1), y(i+1)) = 0, so u(i+1) = 0 and, for the same reason, v(i+1) = 0:

( −ui )   [ ∂ui/∂x  ∂ui/∂y ] ( x(i+1) − xi )
(     ) = [                ] (              )
( −vi )   [ ∂vi/∂x  ∂vi/∂y ] ( y(i+1) − yi )

In matrix form:

−f(Xi) = J(Xi)·(X(i+1) − Xi)

where X = (x, y)ᵀ and J is the Jacobian matrix of partial derivatives.

X(i+1) = Xi − J(Xi)⁻¹·f(Xi)
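For two equations the Jacobian solve can be written out explicitly with the 2×2 inverse. A minimal sketch (the function names f, g, jac and the test system x²+y² = 5, x − y = 1 with root (2, 1) are illustrative):

```python
def newton_2d(f, g, jac, x0, y0, tol=1e-10, max_iter=50):
    """Newton-Raphson for two equations f(x,y)=0, g(x,y)=0.
    jac(x, y) must return the four partials (fx, fy, gx, gy)."""
    x, y = x0, y0
    for _ in range(max_iter):
        u, v = f(x, y), g(x, y)
        fx, fy, gx, gy = jac(x, y)
        det = fx * gy - fy * gx            # determinant of the Jacobian
        if det == 0:
            raise ZeroDivisionError("singular Jacobian")
        # solve J * (dx, dy) = (-u, -v) via the explicit 2x2 inverse
        dx = (-u * gy + v * fy) / det
        dy = (u * gx - v * fx) / det
        x, y = x + dx, y + dy
        if max(abs(dx), abs(dy)) < tol:
            return x, y
    return x, y
```

Starting from (2.5, 0.5), the iteration converges to the root (2, 1) of this test system.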

Pitfalls:

- Multiple roots
- Slowly converging function/diverging

[Figures: a diverging case and a slowly converging case]


Secant Method:

A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative. It can be approximated by a backward finite difference (remember backward FD from chapter 3):

f′(xi) ≈ ( f(x(i−1)) − f(xi) ) / ( x(i−1) − xi )

Now, substituting into the Newton-Raphson equation

x(i+1) = xi − f(xi) / f′(xi)

we can derive

x(i+1) = xi − f(xi)·( x(i−1) − xi ) / ( f(x(i−1)) − f(xi) )

So we calculate the next iterate without evaluating the derivative.
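The derivative-free update needs two starting points instead of one. A minimal sketch (tolerance and iteration cap are illustrative):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: Newton-Raphson with f' replaced by a backward
    finite difference through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f0 == f1:
            break                        # flat secant line; cannot proceed
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)   # derivative-free update
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2                  # slide the two-point window forward
    return x1
```

For example, secant(lambda x: x*x - 2, 1.0, 2.0) converges to √2, at only one function evaluation per iteration.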

Modified Secant Method:

f′(xi) ≈ ( f(xi + δx) − f(xi) ) / δx

where δx is a small perturbation.
