
2. Solution of Linear Equations

Gokarna Bahadur Motra
Finding the determinant by inspection
Special case: if two rows or two columns are proportional (i.e., multiples of each other), then the determinant of the matrix is zero.

\[
\begin{vmatrix} 2 & 7 & 8 \\ 3 & 2 & 4 \\ 2 & 7 & 8 \end{vmatrix} = 0
\]

because rows 1 and 3 are proportional to each other.

If the determinant of a matrix is zero, the matrix is called a singular matrix.
What is a cofactor?
Cofactor method
If A is a square matrix, e.g.

\[
A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\]

the minor M_ij of entry a_ij is the determinant of the submatrix that remains after the ith row and jth column are deleted from A.
The cofactor of entry a_ij is C_ij = (-1)^(i+j) M_ij. For example,

\[
M_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = a_{21}a_{33} - a_{23}a_{31},
\qquad
C_{12} = (-1)^{1+2} M_{12} = -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
\]
What is a cofactor?

Sign of the cofactor: the factor (-1)^(i+j) follows the checkerboard pattern

\[
\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}
\]

Find the minor and cofactor of a_33 for

\[
A = \begin{bmatrix} 2 & 4 & -3 \\ 1 & 0 & 4 \\ 2 & -1 & 2 \end{bmatrix}
\]

Minor:
\[
M_{33} = \begin{vmatrix} 2 & 4 \\ 1 & 0 \end{vmatrix} = 2\cdot 0 - 4\cdot 1 = -4
\]

Cofactor:
\[
C_{33} = (-1)^{3+3} M_{33} = M_{33} = -4
\]
Cofactor method of obtaining the determinant of a matrix

The determinant of an n x n matrix A can be computed by multiplying ALL the entries in ANY row (or column) by their cofactors and adding the resulting products. That is, for each 1 <= i <= n and 1 <= j <= n:

Cofactor expansion along the jth column
\[
\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \dots + a_{nj}C_{nj}
\]

Cofactor expansion along the ith row
\[
\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \dots + a_{in}C_{in}
\]


Example: evaluate det(A) for

\[
A = \begin{bmatrix} 1 & 0 & 2 & -3 \\ 3 & 4 & 0 & 1 \\ -1 & 5 & 2 & -2 \\ 0 & 1 & 1 & 3 \end{bmatrix}
\]

Expanding along the first row, det(A) = a_11 C_11 + a_12 C_12 + a_13 C_13 + a_14 C_14:

\[
\det(A) = (1)\begin{vmatrix} 4 & 0 & 1 \\ 5 & 2 & -2 \\ 1 & 1 & 3 \end{vmatrix}
- (0)\begin{vmatrix} 3 & 0 & 1 \\ -1 & 2 & -2 \\ 0 & 1 & 3 \end{vmatrix}
+ (2)\begin{vmatrix} 3 & 4 & 1 \\ -1 & 5 & -2 \\ 0 & 1 & 3 \end{vmatrix}
- (-3)\begin{vmatrix} 3 & 4 & 0 \\ -1 & 5 & 2 \\ 0 & 1 & 1 \end{vmatrix}
= (1)(35) - 0 + (2)(62) - (-3)(13) = 198
\]
Example: evaluate

\[
\det(A) = \begin{vmatrix} 1 & 5 & -3 \\ 1 & 0 & 2 \\ 3 & -1 & 2 \end{vmatrix}
\]

by cofactor expansion along the third column:

\[
\det(A) = a_{13}C_{13} + a_{23}C_{23} + a_{33}C_{33}
\]

\[
\det(A) = -3(-1)^{4}\begin{vmatrix} 1 & 0 \\ 3 & -1 \end{vmatrix}
+ 2(-1)^{5}\begin{vmatrix} 1 & 5 \\ 3 & -1 \end{vmatrix}
+ 2(-1)^{6}\begin{vmatrix} 1 & 5 \\ 1 & 0 \end{vmatrix}
= -3(-1-0) + 2(-1)(-1-15) + 2(0-5) = 25
\]
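To make the cofactor expansion concrete, here is a minimal Python sketch (not part of the original notes; the function name det_cofactor is ours) that computes a determinant by recursive expansion along the first row. Applied to the 3 x 3 matrix above it returns 25, matching the hand calculation. Note that this approach costs O(n!) operations and is for illustration only; elimination methods are used in practice.

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (O(n!), teaching only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)  # (-1)^(1+j) in 1-based indexing
    return total

print(det_cofactor([[1, 5, -3], [1, 0, 2], [3, -1, 2]]))  # 25.0, as in the worked example
```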
Quadratic form

The scalar
\[
U = \mathbf{d}^{T}\,\mathbf{k}\,\mathbf{d}, \qquad \mathbf{d} = \text{vector}, \quad \mathbf{k} = \text{square matrix}
\]
is known as a quadratic form.

If U > 0 (for all nonzero d): the matrix k is known as positive definite.

If U >= 0: the matrix k is known as positive semidefinite.
Quadratic form

Let
\[
\mathbf{d} = \begin{Bmatrix} d_1 \\ d_2 \end{Bmatrix}, \qquad
\mathbf{k} = \begin{bmatrix} k_{11} & k_{12} \\ k_{21} & k_{22} \end{bmatrix} \quad \text{(symmetric matrix, } k_{21} = k_{12}\text{)}
\]

Then
\[
U = \mathbf{d}^{T}\mathbf{k}\,\mathbf{d}
= \begin{bmatrix} d_1 & d_2 \end{bmatrix}
\begin{bmatrix} k_{11} & k_{12} \\ k_{12} & k_{22} \end{bmatrix}
\begin{Bmatrix} d_1 \\ d_2 \end{Bmatrix}
= \begin{bmatrix} d_1 & d_2 \end{bmatrix}
\begin{Bmatrix} k_{11}d_1 + k_{12}d_2 \\ k_{12}d_1 + k_{22}d_2 \end{Bmatrix}
\]
\[
= d_1(k_{11}d_1 + k_{12}d_2) + d_2(k_{12}d_1 + k_{22}d_2)
= k_{11}d_1^2 + 2k_{12}d_1 d_2 + k_{22}d_2^2
\]
Differentiation of the quadratic form

Differentiate U with respect to d1:
\[
\frac{\partial U}{\partial d_1} = 2k_{11}d_1 + 2k_{12}d_2
\]

Differentiate U with respect to d2:
\[
\frac{\partial U}{\partial d_2} = 2k_{12}d_1 + 2k_{22}d_2
\]

Hence
\[
\frac{\partial U}{\partial \mathbf{d}} =
\begin{Bmatrix} \partial U/\partial d_1 \\ \partial U/\partial d_2 \end{Bmatrix}
= 2\begin{bmatrix} k_{11} & k_{12} \\ k_{12} & k_{22} \end{bmatrix}
\begin{Bmatrix} d_1 \\ d_2 \end{Bmatrix}
= 2\,\mathbf{k}\,\mathbf{d}
\]
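The result ∂U/∂d = 2kd can be checked symbolically. The short SymPy sketch below is ours (not from the notes); it expands U for the 2 x 2 symmetric case and differentiates it term by term.

```python
import sympy as sp

d1, d2, k11, k12, k22 = sp.symbols('d1 d2 k11 k12 k22')

# U = d^T k d for a symmetric 2x2 matrix k (expanded form derived above)
U = k11*d1**2 + 2*k12*d1*d2 + k22*d2**2

print(sp.diff(U, d1))   # 2*d1*k11 + 2*d2*k12  -> first component of 2*k*d
print(sp.diff(U, d2))   # 2*d1*k12 + 2*d2*k22  -> second component of 2*k*d
```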
Solution of Equations

1. Gauss Elimination
Equations of the form Ax = b are solved by successively eliminating the unknowns.

These elimination operations can be written in matrix form, reducing the coefficient matrix to upper triangular form.

Finally, back substituting, we get the solution.


General Algorithm for Gauss Elimination
For computer implementation the method can be expressed in a more general form as follows.
The idea at step 1 is to use equation 1 (the first row) to eliminate x1 from the remaining equations. We denote the step number as a superscript set in parentheses. The reduction process at step 1 is

\[
a_{ij}^{(2)} = a_{ij} - \frac{a_{i1}}{a_{11}}\,a_{1j}, \qquad
b_{i}^{(2)} = b_{i} - \frac{a_{i1}}{a_{11}}\,b_{1}, \qquad i, j = 2, \dots, n
\]

The factors a_i1/a_11 are simply the row multipliers that were referred to in the example discussed previously. Also, a_11 is referred to as the pivot. The reduction is carried out for all elements for which i and j range from 2 to n. The elements in rows 2 to n of the first column are zeroes since x1 is eliminated; in the computer implementation we need not set them to zero, but they are zeroes for our consideration. At the start of step 2, the first row and column are in final form and the remaining block is reduced at step 2, and so on for step k. The general reduction scheme, with limits on the indices, may be put as follows. In step k,

\[
a_{ij}^{(k+1)} = a_{ij}^{(k)} - \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}\,a_{kj}^{(k)}, \qquad
b_{i}^{(k+1)} = b_{i}^{(k)} - \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}\,b_{k}^{(k)}, \qquad i, j = k+1, \dots, n
\]

After (n - 1) steps, the coefficient matrix is upper triangular. The superscripts are for the convenience of presentation; in the computer implementation they can be avoided. Dropping the superscripts, the back-substitution process is given by x_n = b_n / a_nn, and then

\[
x_i = \frac{1}{a_{ii}}\left(b_i - \sum_{j=i+1}^{n} a_{ij}\,x_j\right), \qquad i = n-1, n-2, \dots, 1
\]

The algorithm discussed above is given below in the form of computer logic.

Algorithm 1: General Matrix

Forward elimination (reduction of A, b), followed by back-substitution; at the end, b contains the solution to Ax = b.
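Since the original pseudocode of Algorithm 1 is not reproduced here, the following Python sketch is one possible rendering of the same logic, under the assumption (as in the notes) that no pivoting or row interchanges are needed.

```python
import numpy as np

def gauss_solve(A, b):
    """Forward elimination (no pivoting, as in the notes) followed by back-substitution."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # forward elimination: step k eliminates x_k from rows k+1 .. n-1
    for k in range(n - 1):
        for i in range(k + 1, n):
            c = A[i, k] / A[k, k]          # row multiplier a_ik / a_kk (a_kk is the pivot)
            A[i, k:] -= c * A[k, k:]
            b[i] -= c * b[k]
    # back-substitution: x_n = b_n / a_nn, then work upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# small check
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # -> [0.8, 1.4]
```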


Symmetric Matrix
If A is symmetric, then the previous algorithm needs two modifications. One is that the multiplier is taken from the stored upper triangle using symmetry (a_ik = a_ki), i.e. the multiplier becomes a_ki / a_kk instead of a_ik / a_kk.
The other modification is related to the DO LOOP index (the third DO LOOP in the previous algorithm): only the upper triangle needs to be reduced, so the column loop starts at the row index:
DO j = i, n
2.2 Banded, Symmetric Banded Matrices; Data Storage and Memory Optimization
In a banded matrix, all of the nonzero elements are contained within a band; outside of the band all elements are zero. The stiffness matrix that we will come across in subsequent chapters is a symmetric and banded matrix. Consider an (n x n) symmetric banded matrix.

nbw is called the half-bandwidth. Since only the nonzero elements need to be stored, the elements of this matrix are compactly stored in an (n x nbw) matrix as follows: the principal (1st) diagonal of the first (full) matrix is the first column of the second (banded) matrix. In general, the pth diagonal of the first is stored as the pth column of the second. The correspondence between the elements of the first and the second is

\[
a_{ij}\ (\text{first}) \;=\; b_{i,\,j-i+1}\ (\text{second}), \qquad j \ge i
\]

Also, we note that a_ij = a_ji in the first, and that the number of elements in the kth row of the second is min(n - k + 1, nbw).
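As an illustration of the storage scheme just described, here is a small Python sketch (our own hypothetical helper to_banded, not from the notes) that packs the upper band of a symmetric matrix into an (n x nbw) array, with band[i, j] holding a_{i, i+j} in 0-based indexing.

```python
import numpy as np

def to_banded(A, nbw):
    """Store the upper band of a symmetric (n x n) matrix A in an (n x nbw) array.
    band[i, j] holds A[i, i + j]; trailing rows are zero-padded."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    band = np.zeros((n, nbw))
    for i in range(n):
        m = min(nbw, n - i)            # number of stored entries in row i
        band[i, :m] = A[i, i:i + m]
    return band

K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(to_banded(K, nbw=2))
# [[ 2. -1.]
#  [ 2. -1.]
#  [ 2.  0.]]
```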
Algorithm 2: Symmetric Banded Matrix

Forward elimination followed by back-substitution, operating directly on the banded storage.
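The forward elimination and back-substitution of Algorithm 2 are not reproduced in these notes; the sketch below is one plausible Python version operating on the banded storage described above (no pivoting; a symmetric positive-definite matrix is assumed).

```python
import numpy as np

def solve_banded_sym(band, b):
    """Gauss elimination on symmetric banded storage band[i, j] = A[i, i + j] (0-based).
    Works on copies of band and b; returns the solution x."""
    band = band.astype(float).copy()
    b = np.asarray(b, dtype=float).copy()
    n, nbw = band.shape
    # forward elimination
    for k in range(n - 1):
        nk = min(nbw, n - k)                    # active band width at step k
        for i in range(1, nk):                  # global row k + i; multiplier uses symmetry a_ik = a_ki
            c = band[k, i] / band[k, 0]
            for j in range(i, nk):              # update upper-triangle entries of row k + i
                band[k + i, j - i] -= c * band[k, j]
            b[k + i] -= c * b[k]
    # back-substitution
    for i in range(n - 1, -1, -1):
        ni = min(nbw, n - i)
        s = b[i] - sum(band[i, j] * b[i + j] for j in range(1, ni))
        b[i] = s / band[i, 0]
    return b

print(solve_banded_sym(to_banded(K, nbw=2), [1.0, 0.0, 1.0]))   # same answer as a full solve of K x = b
```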
Solution with Multiple Right Sides
Often, we need to solve Ax = b with the same A but with different right-hand sides b. This happens in the finite element method
when we wish to analyze the same structure for different loading conditions. In this situation, it is computationally
more economical to separate the calculations associated with A from those associated with b . The reason
for this is that the number of operations in reduction of an (n x n) matrix A to its triangular form is proportional to
n3 , while the number of operations for reduction of b and back-substitution is proportional only to n2 . For large n,
this difference is significant.
The previous algorithm for a symmetric banded matrix is modified accordingly as follows:

Algorithm 3: Symmetric Banded, Multiple Right Sides

Forward elimination of A is carried out once; forward elimination of each b and the back-substitution are then carried out per right-hand side. The back-substitution is the same as in Algorithm 2.
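As a quick illustration of the "factor once, solve many" idea (using a general LU factorization from SciPy rather than the banded algorithm of the notes; the sample matrix is hypothetical):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
lu, piv = lu_factor(A)                          # O(n^3) reduction of A, done once

for b in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):    # two different loading cases
    x = lu_solve((lu, piv), np.array(b))        # O(n^2) per right-hand side
    print(x)
```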


Gaussian Elimination with Column Reduction
A careful observation of the Gaussian elimination process shows us a way to reduce the coefficients column after column. This process leads to the simplest procedure for the skyline solution, which we present later. We consider the column-reduction procedure for symmetric matrices. Let the coefficients in the upper triangular matrix and the vector b be stored.
We can understand the motivation behind the column approach by referring back to the reduced matrix. Focus attention on, say, column 3 of the reduced matrix. The first element in this column is unmodified, the second element is modified once, and the third element is modified twice. Using the fact that a_ij = a_ji, since A is assumed to be symmetric, we obtain the reduction formulas for these elements. From these equations, we make the critical observation that the reduction of column 3 can be done using only the elements in columns 1 and 2 and the already reduced elements in column 3. This idea, whereby column 3 is obtained using only elements in previous columns that have already been reduced, is shown schematically.
The reduction of other columns is completed similarly. For example, the reduction of column 4 can be done in three steps, as shown schematically.

We now discuss the reduction of column j, 2 <= j <= n, assuming that columns to the left of j have been fully reduced. The reduction of column j requires only elements from columns to the left of j and appropriately reduced elements from column j. We note that for column j, the number of steps needed is j - 1. Also, since a_11 is not reduced, we need to reduce columns 2 to n only.

The reduction of the right side b can be considered as the reduction of one more column.

We can see that if there is a set of zeroes at the top of a column, the operations need to be carried out only on the elements ranging from the first nonzero element to the diagonal. This leads naturally to the skyline solution.

Skyline Solution: If there are zeroes at the top of a column, only the elements starting from the first nonzero value need be stored. The line separating the top zeroes from the first nonzero element is called the skyline. For efficiency, only the active columns need be stored; these can be stored in a column vector A and a diagonal pointer vector ID.
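The exact storage convention used in the notes is shown only in a figure, so the Python sketch below is an assumed variant: the upper triangle of each column j is stored from its first nonzero row (the skyline) down to the diagonal in a single vector a, and ID[j] records where the diagonal entry of column j sits in a (0-based).

```python
import numpy as np

def to_skyline(K):
    """Pack the active columns of a symmetric matrix K into a vector 'a' plus a
    diagonal pointer 'ID' (ID[j] = index of K[j, j] inside 'a')."""
    n = K.shape[0]
    a, ID = [], []
    for j in range(n):
        nz = np.nonzero(K[:j + 1, j])[0]
        first = int(nz[0]) if nz.size else j    # first stored row (the skyline) of column j
        a.extend(K[first:j + 1, j])
        ID.append(len(a) - 1)                   # position of the diagonal entry
    return np.array(a), np.array(ID)

K = np.array([[2.0, -1.0, 0.0, 0.0],
              [-1.0, 2.0, -1.0, 0.0],
              [0.0, -1.0, 2.0, -1.0],
              [0.0, 0.0, -1.0, 2.0]])
a, ID = to_skyline(K)
print(a)    # [ 2. -1.  2. -1.  2. -1.  2.]
print(ID)   # [0 2 4 6]
```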
2.2 Frontal Solution (Banded, Symmetric Banded Matrices)
• It is also known as the wave front solution.
• It is useful for a symmetric and banded coefficient matrix (the stiffness matrix).
• Here, assembly of the system stiffness matrix is combined with the solution phase.
• The method results in a considerable reduction in computer memory requirements, especially for large
models.
• The stiffness matrix is banded and sparse (many zero-valued terms).
• In the frontal solution technique, the entire system stiffness matrix is not assembled as such. Instead, the
method utilizes the fact that a degree of freedom (an unknown) can be eliminated when the rows and
columns of the stiffness matrix corresponding to that degree of freedom are complete.
• Eliminating a degree of freedom means that we can write an equation for that degree of freedom
in terms of other degrees of freedom and forcing functions. When such an equation is obtained,
it is written to a file and removed from memory.
• The net result is triangularization of the system stiffness matrix and the solutions are obtained by simple
back substitution.

Figure: A system of bar elements (An assembly of bars)


Frontal Method
• Start by defining a 6 × 6 null matrix [K] and proceed with the assembly step, taking the
elements in numerical order. Adding the element stiffness matrix for element 1 to the system
matrix, we obtain

Since U1 is associated only with element 1, displacement U1 appears in none of the other equations and can be eliminated now. The first and second rows of the equations are

\[
kU_1 - kU_2 = F_1 \qquad \text{and} \qquad -kU_1 + kU_2 = F_2
\]

The first equation can be solved for U1 once U2 is known. Mathematically eliminating U1 from the second row, we obtain a reduced equation. Displacement U2 then does not appear in any remaining equations and is eliminated in turn. In sequence, processing the remaining elements and following the elimination procedure results in the full set of reduced equations.

Observe that the procedure has triangularized the system stiffness matrix without formally assembling
that matrix. If we take out the constraint equation, the remaining equations are easily solved by back
substitution. Here, forces are known.

2.3 Data Storage and Memory Optimization

Storing the equation data compactly (for example in banded or skyline form) minimizes the space occupied in computer memory and increases computational efficiency.
2. Iterative Solutions
a) Gauss-Seidel Method
b) Conjugate Gradient Method

a) Gauss-Seidel Method
An iterative method.
Basic Procedure:
-Algebraically solve each linear equation for xi
-Assume an initial guess solution array
-Solve for each xi and repeat
-Use absolute relative approximate error after each iteration
to check if error is within a pre-specified tolerance.
Gauss-Seidel Method
Why?
The Gauss-Seidel Method allows the user to control round-off error.

Elimination methods such as Gaussian Elimination and LU Decomposition are prone to round-off error.

Also: if the physics of the problem are understood, a close initial guess can be made, decreasing the number of iterations needed.
Gauss-Seidel Method
Algorithm
A set of n equations and n unknowns:

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n &= c_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n &= c_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n &= c_n
\end{aligned}
\]

If the diagonal elements are non-zero, rewrite each equation solving for the corresponding unknown. For example: solve the first equation for x1, the second equation for x2, and so on.
Gauss-Seidel Method
Algorithm
Rewriting each equation:

\[
x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n}{a_{11}} \qquad \text{(from equation 1)}
\]
\[
x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n}{a_{22}} \qquad \text{(from equation 2)}
\]
\[
\vdots
\]
\[
x_{n-1} = \frac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \dots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}} \qquad \text{(from equation n-1)}
\]
\[
x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1}}{a_{nn}} \qquad \text{(from equation n)}
\]
Gauss-Seidel Method
Algorithm
General form of each equation:

\[
x_1 = \frac{c_1 - \sum_{\substack{j=1 \\ j\neq 1}}^{n} a_{1j}x_j}{a_{11}}, \qquad
x_2 = \frac{c_2 - \sum_{\substack{j=1 \\ j\neq 2}}^{n} a_{2j}x_j}{a_{22}}, \qquad
x_{n-1} = \frac{c_{n-1} - \sum_{\substack{j=1 \\ j\neq n-1}}^{n} a_{n-1,j}x_j}{a_{n-1,n-1}}, \qquad
x_n = \frac{c_n - \sum_{\substack{j=1 \\ j\neq n}}^{n} a_{nj}x_j}{a_{nn}}
\]
Gauss-Seidel Method
Algorithm
General form for any row i:

\[
x_i = \frac{c_i - \sum_{\substack{j=1 \\ j\neq i}}^{n} a_{ij}x_j}{a_{ii}}, \qquad i = 1, 2, \dots, n.
\]

How or where can this equation be used?


Gauss-Seidel Method
Solve for the unknowns
Assume an initial guess for [X] = [x1, x2, ..., xn-1, xn]^T and use the rewritten equations to solve for each value of xi.
Important: remember to use the most recent value of each xi, which means applying values just calculated to the calculations remaining in the current iteration.
Gauss-Seidel Method

Calculate the absolute relative approximate error:

\[
\left|\epsilon_a\right|_i = \left|\frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}}\right| \times 100
\]

So when has the answer been found?

The iterations are stopped when the absolute relative approximate error is less than a prespecified tolerance for all unknowns.
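A compact Python sketch of the procedure just described (initial guess, sweep through the rewritten equations using the most recent values, stop on the absolute relative approximate error) might look as follows; the function name and tolerance are our choices. Run on the coefficient matrix of Example 2 below, it converges toward [1, 3, 4], as in the notes.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol_pct=0.05, max_iter=100):
    """Gauss-Seidel sweeps using the latest values; stops when the absolute relative
    approximate error (in %) of every unknown is below tol_pct."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            if x[i] != 0.0:
                max_err = max(max_err, abs((x[i] - old) / x[i]) * 100.0)
        if max_err < tol_pct:
            break
    return x

# Example 2 from these notes
print(gauss_seidel([[12, 3, -5], [1, 5, 3], [3, 7, 13]], [1, 28, 76], x0=[1, 0, 1]))
```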
Gauss-Seidel Method: Example 1
The upward velocity of a rocket is given at three different times.

Table 1. Velocity vs. time data.

Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial as

\[
v(t) = a_1 t^2 + a_2 t + a_3, \qquad 5 \le t \le 12.
\]

Using a matrix template of the form

\[
\begin{bmatrix} t_1^2 & t_1 & 1 \\ t_2^2 & t_2 & 1 \\ t_3^2 & t_3 & 1 \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix}
= \begin{Bmatrix} v_1 \\ v_2 \\ v_3 \end{Bmatrix}
\]

the system of equations becomes

\[
\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix}
= \begin{Bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{Bmatrix}
\]

Initial guess: assume

\[
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix} = \begin{Bmatrix} 1 \\ 2 \\ 5 \end{Bmatrix}
\]
Gauss-Seidel Method: Example 1

Rewriting each equation:

\[
a_1 = \frac{106.8 - 5a_2 - a_3}{25}, \qquad
a_2 = \frac{177.2 - 64a_1 - a_3}{8}, \qquad
a_3 = \frac{279.2 - 144a_1 - 12a_2}{1}
\]

Applying the initial guess [a1, a2, a3] = [1, 2, 5] and solving for ai:

\[
a_1 = \frac{106.8 - 5(2) - (5)}{25} = 3.6720
\]
\[
a_2 = \frac{177.2 - 64(3.6720) - (5)}{8} = -7.8510
\]
\[
a_3 = \frac{279.2 - 144(3.6720) - 12(-7.8510)}{1} = -155.36
\]

When solving for a2, how many of the initial guess values were used?
Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error

\[
\left|\epsilon_a\right|_i = \left|\frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}}\right| \times 100
\]

At the end of the first iteration,

\[
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix} = \begin{Bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{Bmatrix}
\]

\[
\left|\epsilon_a\right|_1 = \left|\frac{3.6720 - 1.0000}{3.6720}\right| \times 100 = 72.76\%
\]
\[
\left|\epsilon_a\right|_2 = \left|\frac{-7.8510 - 2.0000}{-7.8510}\right| \times 100 = 125.47\%
\]
\[
\left|\epsilon_a\right|_3 = \left|\frac{-155.36 - 5.0000}{-155.36}\right| \times 100 = 103.22\%
\]

The maximum absolute relative approximate error is 125.47%.
Gauss-Seidel Method: Example 1
Iteration #2
Using the values of ai from iteration #1,

\[
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix} = \begin{Bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{Bmatrix}
\]

the values of ai are found:

\[
a_1 = \frac{106.8 - 5(-7.8510) - (-155.36)}{25} = 12.056
\]
\[
a_2 = \frac{177.2 - 64(12.056) - (-155.36)}{8} = -54.882
\]
\[
a_3 = \frac{279.2 - 144(12.056) - 12(-54.882)}{1} = -798.34
\]
Gauss-Seidel Method: Example 1

Finding the absolute relative approximate error at the end of the second iteration,

\[
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix} = \begin{Bmatrix} 12.056 \\ -54.882 \\ -798.34 \end{Bmatrix}
\]

\[
\left|\epsilon_a\right|_1 = \left|\frac{12.056 - 3.6720}{12.056}\right| \times 100 = 69.543\%
\]
\[
\left|\epsilon_a\right|_2 = \left|\frac{-54.882 - (-7.8510)}{-54.882}\right| \times 100 = 85.695\%
\]
\[
\left|\epsilon_a\right|_3 = \left|\frac{-798.34 - (-155.36)}{-798.34}\right| \times 100 = 80.540\%
\]

The maximum absolute relative approximate error is 85.695%.
Gauss-Seidel Method: Example 1
Repeating more iterations, the following values are obtained:

Iteration   a1        |ea|1 %   a2         |ea|2 %   a3         |ea|3 %
1           3.6720    72.767    −7.8510    125.47    −155.36    103.22
2           12.056    69.543    −54.882    85.695    −798.34    80.540
3           47.182    74.447    −255.51    78.521    −3448.9    76.852
4           193.33    75.595    −1093.4    76.632    −14440     76.116
5           800.53    75.850    −4577.2    76.112    −60072     75.963
6           3322.6    75.906    −19049     75.972    −249580    75.931

Notice that the relative errors are not decreasing at any significant rate. Also, the solution is not converging to the true solution of

\[
\begin{Bmatrix} a_1 \\ a_2 \\ a_3 \end{Bmatrix} = \begin{Bmatrix} 0.29048 \\ 19.690 \\ 1.0857 \end{Bmatrix}
\]
Gauss-Seidel Method: Pitfall
What went wrong?
Even though done correctly, the answer is not converging to the correct answer.
This example illustrates a pitfall of the Gauss-Seidel method: not all systems of equations will converge.

Is there a fix?
One class of systems of equations always converges: those with a diagonally dominant coefficient matrix.

Diagonally dominant: [A] in [A][X] = [C] is diagonally dominant if

\[
\left|a_{ii}\right| \ge \sum_{\substack{j=1 \\ j\neq i}}^{n} \left|a_{ij}\right| \ \text{for all } i
\quad \text{and} \quad
\left|a_{ii}\right| > \sum_{\substack{j=1 \\ j\neq i}}^{n} \left|a_{ij}\right| \ \text{for at least one } i
\]
Gauss-Seidel Method: Pitfall
Diagonally dominant: the magnitude of the coefficient on the diagonal must be at least equal to the sum of the magnitudes of the other coefficients in that row, with at least one row whose diagonal coefficient is strictly greater than the sum of the magnitudes of the other coefficients in that row.
Which coefficient matrix is diagonally dominant?

\[
[A] = \begin{bmatrix} 2 & 5.81 & 34 \\ 45 & 43 & 1 \\ 123 & 16 & 1 \end{bmatrix}
\qquad
[B] = \begin{bmatrix} 124 & 34 & 56 \\ 23 & 53 & 5 \\ 96 & 34 & 129 \end{bmatrix}
\]

Most physical systems do result in simultaneous linear equations that have diagonally dominant coefficient matrices.
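The definition above is easy to check in code. The small Python sketch below (helper name ours) applies it; run on the coefficient matrix of Example 2 that follows, it returns True, consistent with the hand check in the notes.

```python
import numpy as np

def is_diagonally_dominant(A):
    """|a_ii| >= sum_{j != i} |a_ij| for every row, strictly greater for at least one row."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag >= off) and np.any(diag > off))

print(is_diagonally_dominant([[12, 3, -5], [1, 5, 3], [3, 7, 13]]))   # True (Example 2)
```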
Gauss-Seidel Method: Example 2
Given the system of equations

\[
\begin{aligned}
12x_1 + 3x_2 - 5x_3 &= 1 \\
x_1 + 5x_2 + 3x_3 &= 28 \\
3x_1 + 7x_2 + 13x_3 &= 76
\end{aligned}
\]

the coefficient matrix is

\[
[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}
\]

With an initial guess of

\[
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} 1 \\ 0 \\ 1 \end{Bmatrix}
\]

will the solution converge using the Gauss-Seidel method?
Gauss-Seidel Method: Example 2

Checking if the coefficient matrix

\[
[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}
\]

is diagonally dominant:

\[
\left|a_{11}\right| = 12 \ge \left|a_{12}\right| + \left|a_{13}\right| = |3| + |-5| = 8
\]
\[
\left|a_{22}\right| = 5 \ge \left|a_{21}\right| + \left|a_{23}\right| = |1| + |3| = 4
\]
\[
\left|a_{33}\right| = 13 \ge \left|a_{31}\right| + \left|a_{32}\right| = |3| + |7| = 10
\]

The inequalities are all true and at least one is strictly greater than; therefore, the solution should converge using the Gauss-Seidel method.
Gauss-Seidel Method: Example 2
Rewriting each equation, with an initial guess of [x1, x2, x3] = [1, 0, 1]:

\[
x_1 = \frac{1 - 3x_2 + 5x_3}{12} = \frac{1 - 3(0) + 5(1)}{12} = 0.50000
\]
\[
x_2 = \frac{28 - x_1 - 3x_3}{5} = \frac{28 - (0.50000) - 3(1)}{5} = 4.9000
\]
\[
x_3 = \frac{76 - 3x_1 - 7x_2}{13} = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923
\]
Gauss-Seidel Method: Example 2
The absolute relative approximate errors are

\[
\left|\epsilon_a\right|_1 = \left|\frac{0.50000 - 1.0000}{0.50000}\right| \times 100 = 100.00\%
\]
\[
\left|\epsilon_a\right|_2 = \left|\frac{4.9000 - 0}{4.9000}\right| \times 100 = 100.00\%
\]
\[
\left|\epsilon_a\right|_3 = \left|\frac{3.0923 - 1.0000}{3.0923}\right| \times 100 = 67.662\%
\]

The maximum absolute relative error after the first iteration is 100%.
Gauss-Seidel Method: Example 2
After iteration #1, [x1, x2, x3] = [0.50000, 4.9000, 3.0923]. Substituting these x values into the equations:

\[
x_1 = \frac{1 - 3(4.9000) + 5(3.0923)}{12} = 0.14679
\]
\[
x_2 = \frac{28 - (0.14679) - 3(3.0923)}{5} = 3.7153
\]
\[
x_3 = \frac{76 - 3(0.14679) - 7(3.7153)}{13} = 3.8118
\]

After iteration #2, [x1, x2, x3] = [0.14679, 3.7153, 3.8118].
Gauss-Seidel Method: Example 2

Iteration #2 absolute relative approximate errors:

\[
\left|\epsilon_a\right|_1 = \left|\frac{0.14679 - 0.50000}{0.14679}\right| \times 100 = 240.61\%
\]
\[
\left|\epsilon_a\right|_2 = \left|\frac{3.7153 - 4.9000}{3.7153}\right| \times 100 = 31.889\%
\]
\[
\left|\epsilon_a\right|_3 = \left|\frac{3.8118 - 3.0923}{3.8118}\right| \times 100 = 18.874\%
\]

The maximum absolute relative error after the second iteration is 240.61%.

This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?
Gauss-Seidel Method: Example 2
Repeating more iterations, the following values are obtained:

Iteration   x1         |ea|1 %   x2        |ea|2 %    x3        |ea|3 %
1           0.50000    100.00    4.9000    100.00     3.0923    67.662
2           0.14679    240.61    3.7153    31.889     3.8118    18.876
3           0.74275    80.236    3.1644    17.408     3.9708    4.0042
4           0.94675    21.546    3.0281    4.4996     3.9971    0.65772
5           0.99177    4.5391    3.0034    0.82499    4.0001    0.074383
6           0.99919    0.74307   3.0001    0.10856    4.0001    0.00101

The solution obtained, [x1, x2, x3] = [0.99919, 3.0001, 4.0001], is close to the exact solution of [x1, x2, x3] = [1, 3, 4].
Gauss-Seidel Method: Example 3
Given the system of equations

\[
\begin{aligned}
3x_1 + 7x_2 + 13x_3 &= 76 \\
x_1 + 5x_2 + 3x_3 &= 28 \\
12x_1 + 3x_2 - 5x_3 &= 1
\end{aligned}
\]

with an initial guess of [x1, x2, x3] = [1, 0, 1], rewriting the equations gives

\[
x_1 = \frac{76 - 7x_2 - 13x_3}{3}, \qquad
x_2 = \frac{28 - x_1 - 3x_3}{5}, \qquad
x_3 = \frac{1 - 12x_1 - 3x_2}{-5}
\]
Gauss-Seidel Method: Example 3
Conducting six iterations, the following values are obtained:

Iteration   x1             |ea|1 %   x2             |ea|2 %   x3             |ea|3 %
1           21.000         95.238    0.80000        100.00    50.680         98.027
2           −196.15        110.71    14.421         94.453    −462.30        110.96
3           −1995.0        109.83    −116.02        112.43    4718.1         109.80
4           −20149         109.90    1204.6         109.63    −47636         109.90
5           2.0364×10^5    109.89    −12140         109.92    4.8144×10^5    109.89
6           −2.0579×10^5   109.89    1.2272×10^5    109.89    −4.8653×10^6   109.89

The values are not converging.

Does this mean that the Gauss-Seidel method cannot be used?
Gauss-Seidel Method
The Gauss-Seidel method can still be used. The coefficient matrix

\[
[A] = \begin{bmatrix} 3 & 7 & 13 \\ 1 & 5 & 3 \\ 12 & 3 & -5 \end{bmatrix}
\]

is not diagonally dominant, but this is the same set of equations used in Example #2, which did converge, with the rows reordered as

\[
[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}
\]

If a system of linear equations is not diagonally dominant, check to see if rearranging the equations can form a diagonally dominant matrix.
Gauss-Seidel Method
Not every system of equations can be rearranged to have a diagonally dominant coefficient matrix.
Observe the set of equations

\[
\begin{aligned}
x_1 + x_2 + x_3 &= 3 \\
2x_1 + 3x_2 + 4x_3 &= 9 \\
x_1 + 7x_2 + x_3 &= 9
\end{aligned}
\]

Which equation(s) prevent this set of equations from having a diagonally dominant coefficient matrix?
Gauss-Seidel Method
Summary

- Advantages of the Gauss-Seidel Method


- Algorithm for the Gauss-Seidel Method
- Pitfalls of the Gauss-Seidel Method
Gauss-Seidel Method

Questions?
2 b) Conjugate Gradient Method
The conjugate gradient method is an iterative method that is used to solve large symmetric linear systems Ax = b. The method is especially efficient when the matrix A is sparse.

\[
x_0 \in \mathbb{R}^{N}, \qquad r_0 = b - A x_0, \qquad p_0 = r_0
\]

\[
r_k = b - A x_k \quad \text{is the residual at iteration } k.
\]

Two non-zero vectors u and v are conjugate (with respect to A) if

\[
u^{T} A v = 0
\]

Since A is symmetric and positive-definite, the left-hand side defines an inner product:

\[
u^{T} A v = \langle u, v \rangle_A := \langle A u, v \rangle = \langle u, A^{T} v \rangle = \langle u, A v \rangle
\]
Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u is conjugate to v, then v is conjugate to u. Suppose that

\[
P = \{p_1, \dots, p_n\}
\]

is a set of n mutually conjugate vectors (with respect to A). Then P forms a basis for \(\mathbb{R}^{n}\), and we may express the solution x* of Ax = b in this basis:

\[
x^{*} = \sum_{i=1}^{n} \alpha_i p_i
\]

Based on this expansion we calculate

\[
A x^{*} = \sum_{i=1}^{n} \alpha_i A p_i
\;\Rightarrow\;
p_k^{T} A x^{*} = \sum_{i=1}^{n} \alpha_i\, p_k^{T} A p_i
\;\Rightarrow\;
p_k^{T} b = \sum_{i=1}^{n} \alpha_i\, p_k^{T} A p_i
\]

\[
p_k^{T} b = \alpha_k\, p_k^{T} A p_k
\qquad \left(\text{since } u^{T} A v = 0 \text{ for conjugate vectors, } p_k^{T} A p_i = 0 \text{ for } i \neq k\right)
\]

\[
\alpha_k = \frac{p_k^{T} b}{p_k^{T} A p_k}
\]
The conjugate gradient method is an iterative method. It approximately solves systems where n is so large that a direct method would take too much time.
The initial guess for x* is x0 (we can assume without loss of generality that x0 = 0; otherwise consider the system Ax* = b − Ax0).
The solution x* should minimize the quadratic function

\[
f(x) = \tfrac{1}{2}\, x^{T} A x - x^{T} b, \qquad x \in \mathbb{R}^{n}
\]

The existence of a unique minimizer is apparent since its second derivative is given by a symmetric positive-definite matrix,

\[
\nabla^{2} f(x) = A
\]

and that the minimizer (use Df(x) = 0) solves the initial problem is obvious from its first derivative,

\[
\nabla f(x) = A x - b
\]

Take the first basis vector p0 (the residual) to be the negative of the gradient of f at x = x0. The gradient of f equals Ax − b, so starting with an initial guess x0 this means we take p0 = b − Ax0. The other vectors in the basis will be conjugate to the gradient, hence the name conjugate gradient method.
Let rk be the residual at the kth step:

\[
r_k = b - A x_k
\]

rk is the negative gradient of f at x = xk, so the gradient descent method would move in the direction rk. However, we insist that the directions pk be conjugate to each other. A practical way to enforce this is to require that the next search direction be built out of the current residual and all previous search directions. This gives the following expression:

\[
p_k = r_k - \sum_{i<k} \frac{p_i^{T} A r_k}{p_i^{T} A p_i}\, p_i
\]
The next optimal location is given by

\[
x_{k+1} = x_k + \alpha_k p_k, \qquad
\alpha_k = \frac{p_k^{T}(b - A x_k)}{p_k^{T} A p_k} = \frac{p_k^{T} r_k}{p_k^{T} A p_k}
\]

The expression for αk is derived by writing out f(x_{k+1}) and minimizing it with respect to αk.

The algorithm as written requires storage of all previous search directions and residual vectors, as well as many matrix-vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows that ri is orthogonal to rj, i.e., r_i^T r_j = 0 for i ≠ j, and that pi is A-orthogonal to pj, i.e., p_i^T A p_j = 0 for i ≠ j. This can be regarded as follows: as the algorithm progresses, the pi and ri span the same Krylov subspace. Here, the ri form an orthogonal basis with respect to the standard inner product, and the pi form an orthogonal basis with respect to the inner product induced by A. Therefore, xk can be regarded as the projection of x* on the Krylov subspace.

The algorithm is detailed below for solving Ax = b where A is a real, symmetric, positive-definite matrix. The vector x0 can be an approximate initial solution or 0.

Conjugate Gradient Algorithm
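The algorithm figure and worked problem are not reproduced here, so the following Python sketch shows a standard (unpreconditioned) conjugate gradient loop consistent with the formulas derived above; variable and function names are ours.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    """Unpreconditioned conjugate gradient for a symmetric positive-definite matrix A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x                      # r0 = b - A x0
    p = r.copy()                       # p0 = r0
    rs_old = r @ r
    for _ in range(n):                 # in exact arithmetic CG finishes in at most n steps
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # equals p_k^T r_k / p_k^T A p_k, since p_k^T r_k = r_k^T r_k
        x += alpha * p
        r -= alpha * Ap                # r_{k+1} = r_k - alpha_k A p_k
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # new search direction, conjugate to the previous ones
        rs_old = rs_new
    return x

# small check on a symmetric positive-definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # close to [1/11, 7/11]
```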
Fourier Integrals
Discrete Fourier Transform (DFT)
Fast Fourier Transform (FFT)
Discrete Fourier Transform (DFT) and Fast Fourier Transform (FFT)

• Vibrations of civil engineering structures can lead to resonance and collapse, or to breaks propagating in structural elements.
• The most striking example is the Tacoma Narrows Bridge.
• Wind excited vibrations in the suspension bridge, which became resonant (sines and cosines) with large amplitude, and the structure broke apart. A movie is available. After this you will understand why it matters to study the spectrum and amplitude of frequency responses, to find resonance frequencies.
Sensors in Civil Engineering Curriculum (North Carolina State Univ.)

• Sensors are increasingly used by practicing civil engineers to monitor,


measure, and remotely observe the world around them. However,
these technologies are not present in traditional CCEE curricula.
• A research team of T.M. Evans (PI), M.A. Gabr (co-PI), Z. Ning (GRA),
and C. Markham (URA) received a grant from the National Science
Foundation to address this issue by developing educational modules
that will be incorporated in a series of CCEE courses.
• Each module will build on the content from previous modules,
providing both breadth and depth to your studies.
Frequency Domain Analysis of Signals
A Voice Wave Form in Time and Frequency Domain
Fourier Integrals
• For non-periodic applications (or a specialized Fourier series when the period of the function becomes infinite: L → ∞)

\[
f_L(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\!\left(\frac{n\pi x}{L}\right) + b_n \sin\!\left(\frac{n\pi x}{L}\right)\right]
= a_0 + \sum_{n=1}^{\infty}\left[a_n \cos(w_n x) + b_n \sin(w_n x)\right], \qquad w_n = \frac{n\pi}{L}
\]
\[
= \sum_{n=0}^{\infty}\left[a_n \cos(w_n x) + b_n \sin(w_n x)\right]
\]

Note that

\[
\Delta w = w_{n+1} - w_n = \frac{(n+1)\pi}{L} - \frac{n\pi}{L} = \frac{\pi}{L}
\]

so that

\[
f_L(x) = \frac{1}{\pi}\sum_{n=0}^{\infty}\left[\cos(w_n x)\,\Delta w \int_{-L}^{L} f(v)\cos(w_n v)\,dv
+ \sin(w_n x)\,\Delta w \int_{-L}^{L} f(v)\sin(w_n v)\,dv\right]
\]

As \(L \to \infty\), \(\Delta w \to 0\) and \(\sum(\cdot)\,\Delta w \to \int(\cdot)\,dw\):

\[
f(x) = \frac{1}{\pi}\int_{0}^{\infty}\left[\cos(wx)\int_{-\infty}^{\infty} f(v)\cos(wv)\,dv
+ \sin(wx)\int_{-\infty}^{\infty} f(v)\sin(wv)\,dv\right] dw
\]

\[
f(x) = \int_{0}^{\infty}\left[A(w)\cos(wx) + B(w)\sin(wx)\right] dw \quad : \text{ Fourier integral of } f(x)
\]

where

\[
A(w) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos(wv)\,dv, \qquad
B(w) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\sin(wv)\,dv
\]
Fourier Cosine & Sine Integrals

If the function f(x) is even:

\[
A(w) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos(wv)\,dv
= \frac{2}{\pi}\int_{0}^{\infty} f(v)\cos(wv)\,dv, \qquad B(w) = 0
\]
\[
f(x) = \int_{0}^{\infty} A(w)\cos(wx)\,dw \quad : \text{ Fourier cosine integral}
\]

If the function f(x) is odd:

\[
B(w) = \frac{2}{\pi}\int_{0}^{\infty} f(v)\sin(wv)\,dv, \qquad A(w) = 0
\]
\[
f(x) = \int_{0}^{\infty} B(w)\sin(wx)\,dw \quad : \text{ Fourier sine integral}
\]
Example

Let

\[
f(x) = \begin{cases} 1 & \text{for } -1 \le x \le 1 \\ 0 & \text{for } |x| > 1 \end{cases}
\]

\[
A(w) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos(wv)\,dv
= \frac{1}{\pi}\int_{-1}^{1} \cos(wv)\,dv = \frac{2\sin(w)}{\pi w}
\]
\[
B(w) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\sin(wv)\,dv
= \frac{1}{\pi}\int_{-1}^{1} \sin(wv)\,dv = 0
\]

The Fourier integral of f is

\[
f(x) = \int_{0}^{\infty} A(w)\cos(wx)\,dw = \int_{0}^{\infty} \frac{2\sin(w)}{\pi w}\cos(wx)\,dw
\]
Figure: f10(x) (Fourier integral truncated at an upper limit of 10), f100(x) (truncated at 100), and the real function g(x), plotted for -2 <= x <= 2.
Similar to the Fourier series approximation, the Fourier integral approximation improves as the integration limit increases. It is expected that the integral will converge to the real function when the integration limit is increased to infinity.

Physical interpretation: a higher integration limit means that more high-frequency sinusoidal components have been included in the approximation (a similar effect is observed when a larger n is used in a Fourier series approximation).
This suggests that w can be interpreted as the frequency of each of the sinusoidal waves used to approximate the real function, and A(w) can be interpreted as the amplitude function of the specific sinusoidal wave (similar to the Fourier coefficient in a Fourier series expansion).
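The truncated-integral behaviour described above can be reproduced numerically. The sketch below is ours (it uses scipy.integrate.quad); it evaluates the Fourier integral of the unit pulse with finite upper limits W = 10 and W = 100 at a few points, and the values approach g(x) (1 inside the pulse, 0 outside) as W grows.

```python
import numpy as np
from scipy.integrate import quad

def f_approx(x, W):
    """Truncated Fourier integral of the unit pulse:
    integral from 0 to W of (2/pi) * (sin w / w) * cos(w x) dw."""
    integrand = lambda w: (2.0 / np.pi) * np.sinc(w / np.pi) * np.cos(w * x)  # np.sinc(w/pi) = sin(w)/w
    val, _ = quad(integrand, 0.0, W, limit=400)
    return val

for x in (0.0, 0.5, 1.5):
    # expected limits as W -> infinity: g(0) = 1, g(0.5) = 1, g(1.5) = 0
    print(x, round(f_approx(x, 10), 4), round(f_approx(x, 100), 4))
```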
Fourier Cosine Transform
For an even function f(x):

\[
f(x) = \int_{0}^{\infty} A(w)\cos(wx)\,dw, \qquad \text{where } A(w) = \frac{2}{\pi}\int_{0}^{\infty} f(v)\cos(wv)\,dv.
\]

Define \(A(w) = \sqrt{\dfrac{2}{\pi}}\,\hat{f}_c(w)\). Then

\[
\hat{f}_c(w) = \sqrt{\frac{\pi}{2}}\,A(w) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\cos(wx)\,dx \qquad (v \text{ has been replaced by } x)
\]

\(\hat{f}_c(w)\) is called the Fourier cosine transform of f(x), and

\[
f(x) = \int_{0}^{\infty} A(w)\cos(wx)\,dw = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} \hat{f}_c(w)\cos(wx)\,dw
\]

f(x) is the inverse Fourier cosine transform of \(\hat{f}_c(w)\).


Fourier Sine Transform
Similarly, for an odd function f(x):

\[
f(x) = \int_{0}^{\infty} B(w)\sin(wx)\,dw, \qquad \text{where } B(w) = \frac{2}{\pi}\int_{0}^{\infty} f(v)\sin(wv)\,dv.
\]

Define \(B(w) = \sqrt{\dfrac{2}{\pi}}\,\hat{f}_s(w)\). Then

\[
\hat{f}_s(w) = \sqrt{\frac{\pi}{2}}\,B(w) = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} f(x)\sin(wx)\,dx \qquad (v \text{ has been replaced by } x)
\]

\(\hat{f}_s(w)\) is called the Fourier sine transform of f(x), and

\[
f(x) = \int_{0}^{\infty} B(w)\sin(wx)\,dw = \sqrt{\frac{2}{\pi}}\int_{0}^{\infty} \hat{f}_s(w)\sin(wx)\,dw
\]

f(x) is the inverse Fourier sine transform of \(\hat{f}_s(w)\).


Signals (Based on Wanjun Huang)
Periodic Signals
Time Frequency Analysis
Fourier Transform
Fourier Representation for Four Types of Signals
Periodic Sequence
Discrete Fourier Series (DFS)
Finite Length Sequence
Relationship Between Finite Length Sequence
and Periodic Sequence
Discrete Fourier Transform (DFT)
DFT Example
Fast Fourier Transform (FFT)
Fast Fourier Transform (FFT)
Introduction to Matlab
Data Representation in Matlab
Mathematical Functions of Matlab
Basic Plotting in Matlab
Example 1: Sine Wave
Example 2: Cosine Wave
Example 3: Cosine Wave with Phase Shift
Example 4: Square Wave
Example 5: Square Pulse
Example 6: Gaussian Pulse
Example 7: Exponential Decay
Example 8: Chirp Signal
