
Numerical Methods

Lec. 1: Curve Fitting – Linear Regression


❋ Numerical methods are techniques by which mathematical problems are
formulated so that they can be solved with arithmetic and logical operations.
❋ Curve fitting is the process of constructing a curve, or mathematical function,
that has the best fit to a series of data points, possibly subject to constraints.

Curve Fitting
❋ Regression: Deriving a single curve that represents the general trend of the data, where the data exhibit a significant degree of error.
❋ Interpolation: Fitting a curve or a series of curves that pass directly through each of the data points, where the data are known to be very precise.

1 Linear Regression
❋ Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. The mathematical expression for a straight line is $y = a_0 + a_1 x + e$, where $a_0$ and $a_1$ are coefficients representing the intercept and the slope, respectively, and $e$ is the error, or residual. A residual is the difference between an observed value and the fitted value provided by a model, which can be represented as $e_i = y_i - (a_0 + a_1 x_i)$.

❋ The method of least squares is a parameter estimation method in regression analysis based on minimizing the sum of the squares of the residuals. Linear least squares regression has a number of advantages, including that it yields a unique line for a given set of data points $(x_1, y_1), \dots, (x_n, y_n)$, and that it removes the effect of the signs of the residual errors. The sum of the squares of the residuals is given by:

$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2$
❋ Least-Squares Fit of a Straight Line
To determine values for $a_0$ and $a_1$ such that the sum of square residuals is minimized, $S_r$ is differentiated with respect to each unknown coefficient:

$\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)$

$\frac{\partial S_r}{\partial a_1} = -2 \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right) x_i$

Setting these derivatives equal to zero will result in a minimum $S_r$. If this is done, the equations can be expressed as a set of two simultaneous linear equations with two unknowns ($a_0$ and $a_1$). These are called the normal equations:

$n\, a_0 + \left( \sum x_i \right) a_1 = \sum y_i$

$\left( \sum x_i \right) a_0 + \left( \sum x_i^2 \right) a_1 = \sum x_i y_i$

Solving the normal equations gives

$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}, \qquad a_0 = \bar{y} - a_1 \bar{x}$

❋ Note: $\bar{x}$ and $\bar{y}$ are the means of $x$ and $y$, respectively.
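As an illustration, a minimal Python sketch of the straight-line least-squares fit using the normal equations above (function and variable names are illustrative, not from the lecture):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares fit of y = a0 + a1*x using the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
    a0 = y.mean() - a1 * x.mean()
    return a0, a1

# Example usage with made-up data points
x = [1, 2, 3, 4, 5, 6, 7]
y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
a0, a1 = fit_line(x, y)
print(f"intercept = {a0:.4f}, slope = {a1:.4f}")
```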

Quantification of Error of Linear Regression

❋ Standard deviation of the data about the mean: $s_y = \sqrt{\dfrac{S_t}{n-1}}$, where $S_t = \sum (y_i - \bar{y})^2$.
❋ Standard error of the estimate (spread about the regression line): $s_{y/x} = \sqrt{\dfrac{S_r}{n-2}}$.
❋ Coefficient of determination: $r^2 = \dfrac{S_t - S_r}{S_t}$.
❋ Correlation coefficient: $r = \sqrt{r^2}$.

If $s_{y/x} < s_y$, we say "the method has merit".
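A small Python sketch of these error measures (names are illustrative; a0 and a1 are the fitted intercept and slope):

```python
import numpy as np

def regression_stats(x, y, a0, a1):
    """Compute the standard deviation, standard error of the estimate, and r^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    st = np.sum((y - y.mean())**2)        # spread about the mean
    sr = np.sum((y - a0 - a1 * x)**2)     # spread about the regression line
    s_y = np.sqrt(st / (n - 1))           # standard deviation
    s_yx = np.sqrt(sr / (n - 2))          # standard error of the estimate
    r2 = (st - sr) / st                   # coefficient of determination
    return s_y, s_yx, r2
```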

❋ Linearization of Nonlinear Relationships

We can use linear regression to fit data to a non-linear function (curve) instead of a straight line by using linearization to convert the data to points that fit a linear model, then revert to the original model when we're done.

❋ Exponential model: $y = \alpha_1 e^{\beta_1 x}$; taking the natural logarithm gives $\ln y = \ln \alpha_1 + \beta_1 x$ (slope $\beta_1$, intercept $\ln \alpha_1$).
❋ Power model: $y = \alpha_2 x^{\beta_2}$; taking logarithms gives $\log y = \log \alpha_2 + \beta_2 \log x$ (slope $\beta_2$, intercept $\log \alpha_2$).
❋ Growth-rate (saturation-growth-rate) model: $y = \alpha_3 \dfrac{x}{\beta_3 + x}$; inverting gives $\dfrac{1}{y} = \dfrac{1}{\alpha_3} + \dfrac{\beta_3}{\alpha_3} \dfrac{1}{x}$ (slope $\beta_3/\alpha_3$, intercept $1/\alpha_3$).
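For example, a minimal Python sketch of fitting the exponential model by linearization (log-transform the data, fit a straight line, then transform back; names and data are illustrative):

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = alpha * exp(beta * x) by linearizing: ln(y) = ln(alpha) + beta*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    beta, ln_alpha = np.polyfit(x, np.log(y), 1)   # straight-line fit to (x, ln y)
    return np.exp(ln_alpha), beta

alpha, beta = fit_exponential([1, 2, 3, 4], [2.7, 7.4, 20.1, 54.6])
print(f"y ~ {alpha:.2f} * exp({beta:.2f} * x)")
```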
Lec. 2: Curve Fitting – Polynomial Regression
2 Polynomial Regression
❋ The polynomial regression model

$y = a_0 + a_1 x + a_2 x^2 + \dots + a_m x^m + e$

gives us an inconsistent system of linear equations which can be written in matrix form.

In matrix notation, the equation for a polynomial fit is given by $\mathbf{Z}\mathbf{a} = \mathbf{y}$. We can look for an $\mathbf{a}$ that minimizes the norm of the error, $\|\mathbf{y} - \mathbf{Z}\mathbf{a}\|$. This is an example of a least squares problem, the problem of minimizing a sum of squares. For this case the sum of the squares to be minimized is the sum of the squares of the residuals.

Minimize $S_r = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i - a_2 x_i^2 - \dots - a_m x_i^m \right)^2$

From linear algebra, this problem can be solved by premultiplying by the transpose $\mathbf{Z}^T$, and the solution of the least squares problem comes down to solving the linear system of equations $\mathbf{Z}^T \mathbf{Z}\, \mathbf{a} = \mathbf{Z}^T \mathbf{y}$. These equations are called the normal equations of the least squares problem. This matrix equation can be inverted directly to yield the solution vector $\mathbf{a} = \left( \mathbf{Z}^T \mathbf{Z} \right)^{-1} \mathbf{Z}^T \mathbf{y}$.

❋ Standard error of the estimate for a polynomial fit of order $m$: $s_{y/x} = \sqrt{\dfrac{S_r}{n - (m + 1)}}$ (for linear regression $m = 1$; for quadratic regression $m = 2$).
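A minimal Python sketch of polynomial regression via the normal equations (the column matrix is built from powers of x; names and data are illustrative):

```python
import numpy as np

def fit_polynomial(x, y, m):
    """Least-squares polynomial fit of order m via the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Z = np.vander(x, m + 1, increasing=True)   # columns: 1, x, x^2, ..., x^m
    a = np.linalg.solve(Z.T @ Z, Z.T @ y)      # solve Z^T Z a = Z^T y
    return a                                   # a[0] + a[1]*x + ... + a[m]*x^m

coeffs = fit_polynomial([0, 1, 2, 3, 4, 5], [2.1, 7.7, 13.6, 27.2, 40.9, 61.1], m=2)
print(coeffs)
```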


Lec. 3: Solving Nonlinear Equations
❋ In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function $f$ is a number $x$ such that $f(x) = 0$. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros. Solving an equation $f(x) = g(x)$ is the same as finding the roots of the function $h(x) = f(x) - g(x)$. Thus root-finding algorithms allow solving any equation defined by continuous functions.

Root-Finding Algorithms
❋ Bracketing Methods: These are based on two initial guesses that "bracket" the root, that is, lie on either side of the root. For well-posed problems, the bracketing methods always work but converge slowly. They include the bisection method and the false position method "regula falsi".
❋ Open Methods: These methods can involve one or more initial guesses, but there is no need for them to bracket the root. They do not always work (i.e., they can diverge), but when they do they usually converge quicker. They include the Newton–Raphson method and the secant method.

1 Bisection Method
❋ The input for the method is a continuous function $f$, an interval $[a, b]$, and the function values $f(a)$ and $f(b)$. The function values are of opposite signs. In this case $a$ and $b$ are said to bracket a root since, by the intermediate value theorem, the continuous function must have at least one root in the interval $(a, b)$. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root.
❋ Iteration tasks (a Python sketch follows the list):
1 Calculate $c = \dfrac{a + b}{2}$, the midpoint of the interval.
2 Calculate the function value at the midpoint, $f(c)$.
3 If convergence is satisfactory, return $c$ and stop iterating.
4 If $\operatorname{sign}(f(c)) = \operatorname{sign}(f(a))$, then the new interval is $[c, b]$, otherwise it's $[a, c]$.

❋ Error bound: after $n$ bisections of the starting interval $[a, b]$ the error is at most $\dfrac{b - a}{2^n}$, so the maximum number of iterations needed to reach a tolerance $\varepsilon$ is $n = \left\lceil \log_2 \dfrac{b - a}{\varepsilon} \right\rceil$.
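A minimal bisection sketch in Python (the tolerance handling and names are assumptions, not from the lecture):

```python
import math

def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b] by repeated bisection; f(a), f(b) must differ in sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2                        # midpoint
        if f(c) == 0 or (b - a) / 2 < tol:     # convergence check
            return c
        if math.copysign(1, f(c)) == math.copysign(1, f(a)):
            a = c                              # root lies in [c, b]
        else:
            b = c                              # root lies in [a, c]
    return (a + b) / 2

print(bisect(lambda x: x**3 - x - 2, 1, 2))    # ~1.5214
```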
2 False Position Method
❋ The regula falsi method (also called the linear interpolation method) calculates the new solution estimate as the $x$-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the false-position formula on that line segment.
❋ Iteration tasks (see the sketch after this list):
1 Calculate $c$ from the false-position formula: $c = b - \dfrac{f(b)\,(a - b)}{f(a) - f(b)}$.
2 Calculate the function value at the intercept, $f(c)$.
3 If convergence is satisfactory, return $c$ and stop iterating.
4 If $\operatorname{sign}(f(c)) = \operatorname{sign}(f(a))$, then the new interval is $[c, b]$, otherwise it's $[a, c]$.
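The corresponding Python sketch differs from the bisection sketch only in how the new estimate is computed (names and stopping rule are illustrative):

```python
def false_position(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b] with the regula falsi (false position) method."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c_old = c
        c = b - f(b) * (a - b) / (f(a) - f(b))   # x-intercept of the chord
        if f(c) == 0 or abs(c - c_old) < tol:    # convergence check
            return c
        if f(c) * f(a) > 0:
            a = c
        else:
            b = c
    return c

print(false_position(lambda x: x**3 - x - 2, 1, 2))
```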

3 Newton–Raphson Method
❋ Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, starts with an initial guess, then approximates the function by its tangent line, and finally computes the $x$-intercept of this tangent line. This $x$-intercept will typically be a better approximation to the original function's root than the first guess, and the method can be iterated.

The slope of the tangent at $x_i$ is $f'(x_i) = \dfrac{f(x_i) - 0}{x_i - x_{i+1}}$, which can be rearranged into the Newton–Raphson formula:

$x_{i+1} = x_i - \dfrac{f(x_i)}{f'(x_i)}$
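A minimal Newton–Raphson sketch in Python (the derivative is passed in explicitly; names and tolerances are illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i) starting from x0."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:   # convergence check
            return x_new
        x = x_new
    return x

# Root of x^3 - x - 2, with derivative 3x^2 - 1
print(newton_raphson(lambda x: x**3 - x - 2, lambda x: 3*x**2 - 1, x0=1.5))
```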
4 Secant Method
❋ The secant method can be thought of as a finite-difference approximation of Newton's method, obtained by approximating the derivative by a backward finite divided difference:

$f'(x_i) \approx \dfrac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}$

❋ Note: substituting this approximation into the Newton–Raphson formula gives the secant formula

$x_{i+1} = x_i - \dfrac{f(x_i)\,(x_{i-1} - x_i)}{f(x_{i-1}) - f(x_i)}$
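A Python sketch of the secant iteration (two starting guesses, no derivative needed; names are illustrative):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: Newton's method with a finite-difference derivative."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if abs(x2 - x1) < tol:   # convergence check
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```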
Lec. 4: Solving Linear Systems of Equations
1 Jacobi Method
❋ In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. The method is named after Carl Gustav Jacob Jacobi.

For a system $A\mathbf{x} = \mathbf{b}$, each iteration updates every unknown from the previous iterate:

$x_i^{(k+1)} = \dfrac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right), \qquad i = 1, 2, \dots, n$
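A minimal Jacobi iteration sketch in Python (the stopping criterion, names, and example system are assumptions):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b iteratively with the Jacobi method."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, float)
    D = np.diag(A)                   # diagonal entries a_ii
    R = A - np.diagflat(D)           # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # x_i = (b_i - sum_{j!=i} a_ij x_j) / a_ii
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = [[10, -1, 2], [-1, 11, -1], [2, -1, 10]]
b = [6, 25, -11]
print(jacobi(A, b))
```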

2 Gauss–Seidel Method
❋ Note: often, the initial values are taken to be zero, $x_i^{(0)} = 0$.
❋ In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite.

Unlike the Jacobi method, each updated value is used immediately within the same iteration:

$x_i^{(k+1)} = \dfrac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right), \qquad i = 1, 2, \dots, n$
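A corresponding Gauss–Seidel sketch in Python; the only change from the Jacobi sketch is that each component update uses the values already computed in the current sweep (names and stopping rule are illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b iteratively with the Gauss-Seidel method."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # fresh values x[:i] from this sweep, old values x[i+1:] from the previous one
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x

print(gauss_seidel([[10, -1, 2], [-1, 11, -1], [2, -1, 10]], [6, 25, -11]))
```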

diagonay dominant matrix


❋ A square matrix is diagonally dominant if:

where denotes the entry in the ith row and jth column.
If a strict inequality ( ) is used, this is called strict diagonal dominance.
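A small sketch of this check in Python, useful before applying the Jacobi or Gauss–Seidel methods (function name is illustrative):

```python
import numpy as np

def is_diagonally_dominant(A, strict=True):
    """Check |a_ii| >= (or >) the sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, float))
    diag = np.diag(A)
    off_diag = A.sum(axis=1) - diag
    return np.all(diag > off_diag) if strict else np.all(diag >= off_diag)

print(is_diagonally_dominant([[10, -1, 2], [-1, 11, -1], [2, -1, 10]]))  # True
```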
Lec. 5: Curve Fitting – Interpolation
❋ The polynomial interpolation problem is to find a polynomial $p$ of degree at most $n$ which satisfies $p(x_i) = y_i$ for given data points $(x_0, y_0), \dots, (x_n, y_n)$.

1 Vandermonde Matrix
❋ This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix $V$, as follows. $V$ computes the values of the polynomial $p(x) = a_0 + a_1 x + \dots + a_n x^n$ at the points $x_0, x_1, \dots, x_n$ via a matrix multiplication $V\mathbf{a} = \mathbf{y}$, where $\mathbf{a} = (a_0, \dots, a_n)$ is the vector of coefficients and $\mathbf{y} = (y_0, \dots, y_n)$ is the vector of values (both written as column vectors):

$V = \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix}$

The Vandermonde matrix is named after Alexandre-Théophile Vandermonde.

❋ If the points $x_0, \dots, x_n$ are distinct, then $V$ is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given $V$ and $\mathbf{y}$, one can find the required polynomial $p$ by solving $V\mathbf{a} = \mathbf{y}$ for its coefficients $\mathbf{a}$.
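A Python sketch of interpolation by solving the Vandermonde system (numerically this becomes ill-conditioned for many points, but it matches the formulation above; names and data are illustrative):

```python
import numpy as np

def vandermonde_interpolate(x, y):
    """Return coefficients a_0..a_n with p(x_i) = y_i, by solving V a = y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    V = np.vander(x, increasing=True)   # rows: 1, x_i, x_i^2, ..., x_i^n
    return np.linalg.solve(V, y)

coeffs = vandermonde_interpolate([1, 2, 3], [2, 3, 6])
print(coeffs)   # p(x) = coeffs[0] + coeffs[1]*x + coeffs[2]*x^2
```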

2 Lagrange Polynomial
❋ Given a set of nodes $\{x_0, x_1, \dots, x_k\}$, which must all be distinct, the Lagrange basis for polynomials of degree $\leq k$ for those nodes is the set of polynomials $\{\ell_0(x), \ell_1(x), \dots, \ell_k(x)\}$, each of degree $k$, which take values $\ell_j(x_i) = 0$ if $i \neq j$ and $\ell_j(x_j) = 1$:

$\ell_j(x) = \prod_{\substack{0 \le m \le k \\ m \neq j}} \frac{x - x_m}{x_j - x_m}$

❋ The Lagrange interpolating polynomial for those nodes through the corresponding values $y_0, y_1, \dots, y_k$ is the linear combination:

$L(x) = \sum_{j=0}^{k} y_j\, \ell_j(x)$

(The lecture illustrates the cases of linear, quadratic, and cubic interpolation.)
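A short Python sketch evaluating the Lagrange form directly (names and data are illustrative):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[j], ys[j]) at x."""
    total = 0.0
    k = len(xs)
    for j in range(k):
        # basis polynomial l_j(x): equals 1 at xs[j] and 0 at every other node
        lj = 1.0
        for m in range(k):
            if m != j:
                lj *= (x - xs[m]) / (xs[j] - xs[m])
        total += ys[j] * lj
    return total

print(lagrange_interpolate([1, 2, 3], [2, 3, 6], 2.5))  # same data as the Vandermonde example
```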
