Numerical Methods
Curve Fitting

Regression: Deriving a single curve that represents the general trend of the data, where the data exhibit a significant degree of error.

Interpolation: Fitting a curve or a series of curves that pass directly through each of the data points, where the data are known to be very precise.
1 Linear Regression
❋ Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. The mathematical expression for a straight line is y = a0 + a1·x + e, where a0 and a1 are coefficients representing the intercept and the slope, respectively, and e is the error, or residual. A residual is the difference between an observed value and the fitted value provided by a model, which can be represented as e = y − a0 − a1·x.
❋ The best-fit coefficients minimize the sum of the squared residuals, Sr = Σ(yi − a0 − a1·xi)². Setting the derivatives of Sr with respect to a0 and a1 equal to zero will result in a minimum Sr. If this is done, the equations can be expressed as
n·a0 + (Σxi)·a1 = Σyi
(Σxi)·a0 + (Σxi²)·a1 = Σ(xi·yi)
In matrix notation, the equation for a polynomial fit is given by y ≈ X·a. We can look
for an a that minimizes the norm of the error, ‖y − X·a‖. This is an example of a
least squares problem, the problem of minimizing a sum of squares. For this case the
sum of the squares to be minimized is the sum of the squares of the residuals:
Minimize ‖y − X·a‖² = Σ(yi − ŷi)²
From linear algebra, this problem can be solved by premultiplying by the transpose
X^T, and the solution of the least squares problem comes down to solving the
linear system of equations X^T·X·a = X^T·y. These equations are
called the normal equations of the least squares problem. This matrix equation can
be inverted directly to yield the solution vector a = (X^T·X)^(−1)·X^T·y.
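For the straight-line case the normal equations reduce to a 2×2 system that can be solved in closed form. A minimal Python sketch (the function name `linfit` and the noisy sample data are illustrative, not from the notes):

```python
def linfit(x, y):
    """Fit y = a0 + a1*x by solving the 2x2 normal equations."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    # Normal equations:
    #   n*a0  + sx*a1  = sy
    #   sx*a0 + sxx*a1 = sxy
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - sx * a1) / n
    return a0, a1

x = [0, 1, 2, 3, 4]
y = [1.1, 2.9, 5.2, 7.1, 8.9]   # roughly y = 1 + 2x with noise
a0, a1 = linfit(x, y)
```

The fitted intercept and slope come out close to 1 and 2, as expected for this data.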
Root-Finding Algorithms

Bracketing Methods: These are based on two initial guesses that "bracket" the root, that is, lie on either side of the root. For well-posed problems, the bracketing methods always work but converge slowly. They include the bisection method and the false position method ("regula falsi").

Open Methods: These methods can involve one or more initial guesses, but there is no need for them to bracket the root. They do not always work (i.e., they can diverge), but when they do they usually converge more quickly. They include the Newton–Raphson method and the secant method.
1 Bisection Method
❋ The input for the method is a continuous function f, an interval [a, b],
and the function values f(a) and f(b).
The function values are of opposite signs.
In this case a and b are said to bracket a
root since, by the intermediate value theorem, the continuous function must have
at least one root in the interval (a, b). The method consists of repeatedly bisecting
the interval defined by these values and then selecting the subinterval in which the
function changes sign, and therefore must contain a root.
❋ Iteration tasks:
1 Calculate c = (a + b)/2, the midpoint of the interval.
2 Evaluate f(c); if f(c) = 0 or the interval is sufficiently small, stop and take c as the root.
3 Otherwise, if f(a) and f(c) have opposite signs, set b = c; if not, set a = c. Repeat.
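The bisection loop can be sketched in a few lines of Python (the function name `bisect` and the tolerance defaults are illustrative choices):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: f must be continuous with f(a), f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must bracket a root"
    for _ in range(max_iter):
        c = (a + b) / 2                     # midpoint of the interval
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:    # converged
            return c
        if fa * fc < 0:                     # root lies in [a, c]
            b, fb = c, fc
        else:                               # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 1.0, 2.0)   # approximates sqrt(2)
```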
3 Newton-Raphson Method
❋ Newton's method, also known as the
Newton–Raphson method, named after Isaac Newton
and Joseph Raphson, starts with an initial guess, then
approximates the function by its tangent line, and
finally computes the x-intercept of this tangent line.
This x-intercept will typically be a better approximation to the original function's
root than the first guess, and the method can be iterated.
The slope of the tangent at xi is f'(xi) = (f(xi) − 0) / (xi − xi+1). Rearranging gives the Newton–Raphson formula:
xi+1 = xi − f(xi) / f'(xi)
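The iteration is a one-liner once f and its derivative are available. A minimal sketch in Python (the name `newton` and the stopping criterion on |f(x)| are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)   # x-intercept of the tangent line at x
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)   # approximates sqrt(2)
```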
4 Secant Method
❋ The secant method can be thought of
as a finite-difference approximation of
Newton's method, obtained by approximating the
derivative by a backward finite divided
difference, f'(xi) ≈ (f(xi−1) − f(xi)) / (xi−1 − xi), which gives the iteration
xi+1 = xi − f(xi)·(xi−1 − xi) / (f(xi−1) − f(xi))
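Because the derivative is replaced by the divided difference, the method needs two starting values but no f'. A minimal Python sketch (the name `secant` is an illustrative choice):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's method with f' replaced by a
    backward finite divided difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        # x_{i+1} = x_i - f(x_i) * (x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))
        x0, x1 = x1, x1 - f1 * (x0 - x1) / (f0 - f1)
        f0, f1 = f1, f(x1)
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)   # approximates sqrt(2)
```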
Lec. 4: Solving Linear Systems of Equations
1 Jacobi Method
❋ In numerical linear algebra, the
Jacobi method (a.k.a. the Jacobi
iteration method) is an iterative
algorithm for determining the
solutions of a strictly diagonally
dominant system of linear equations.
Each diagonal element is solved for,
and an approximate value is plugged in:
xi^(k+1) = (bi − Σj≠i aij·xj^(k)) / aii
The process is then iterated until it
converges. The method is named after
Carl Gustav Jacob Jacobi.
❋ A matrix is diagonally dominant if |aii| ≥ Σj≠i |aij| for every row i,
where aij denotes the entry in the ith row and jth column.
If a strict inequality ( > ) is used, this is called strict diagonal dominance.
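The update rule above translates directly to code. A minimal Python sketch (the name `jacobi` and the max-difference convergence test are illustrative choices), applied to a small strictly diagonally dominant system:

```python
def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration for a strictly diagonally dominant system Ax = b."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        # x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        if max(abs(u - v) for u, v in zip(x, x_new)) < tol:
            return x_new
        x = x_new
    return x

# 2x + y = 5, x + 3y = 10  ->  exact solution x = 1, y = 3
sol = jacobi([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```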
Lec. 5: Curve Fitting: Interpolation
❋ The polynomial interpolation problem is to find a polynomial p of degree at most n which satisfies
p(xi) = yi for given data points (x0, y0), ..., (xn, yn).
1 Vandermonde Matrix
❋ This problem can be reformulated in terms of linear algebra by means of the
Vandermonde matrix V, with entries Vij = xi^j, as follows. V computes the values of p at the points
x0, ..., xn via a matrix multiplication V·a = y, where a = (a0, ..., an) is the
vector of coefficients and y = (y0, ..., yn) = (p(x0), ..., p(xn)) is the vector of
values (both written as column vectors). The Vandermonde matrix is named after
Alexandre-Théophile Vandermonde.
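Solving V·a = y for the coefficient vector a yields the interpolating polynomial. A minimal Python sketch (the helper names `vandermonde` and `solve`, and the Gaussian-elimination solver, are illustrative choices; the sample points are hypothetical):

```python
def vandermonde(xs):
    """Build the Vandermonde matrix V with V[i][j] = xs[i]**j."""
    return [[x ** j for j in range(len(xs))] for x in xs]

def solve(M, b):
    """Solve M a = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = (M[i][n] - sum(M[i][j] * a[j]
                              for j in range(i + 1, n))) / M[i][i]
    return a

# Interpolate (0, 1), (1, 3), (2, 7): the unique quadratic is p(x) = 1 + x + x^2
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]
coeffs = solve(vandermonde(xs), ys)   # [a0, a1, a2]
```

In practice the Vandermonde matrix becomes ill-conditioned as n grows, so library least-squares or barycentric-interpolation routines are preferred for larger problems.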