
Sepillo, Randel M

BSEE-3rd Year

Mathematical Foundation of Numerical Methods

The mathematical foundation of numerical methods lies at the intersection of mathematics, computer science, and engineering. Numerical methods are techniques used to approximate solutions to mathematical problems that may not have exact analytical solutions. These methods involve using computers to perform calculations and simulations to obtain approximate solutions.

Numerical methods, by definition, are the application of systematic computational techniques to solve a given problem. From a quantitative modelling point of view, there are many problems for which we do not have a ready formula (i.e., a closed-form solution). For such instances, we have to resort to some form of numerical approximation to arrive at an acceptable solution to the problem. Numerical techniques are extensively used in a variety of domains including fluid mechanics, engineering, pure and applied mathematics, finance, etc. Numerical methods are used for various problems including optimization, root finding, interpolation, etc.

Here are some key mathematical concepts that form the foundation of numerical
methods:

1. Approximation and Errors:

Numerical methods deal with approximations of mathematical quantities. Every computation involves errors due to finite-precision arithmetic, truncation, and round-off. Understanding error analysis is crucial for designing accurate and reliable numerical algorithms.

Approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because:

 The measurement of the data is not precise due to the instruments (e.g., the accurate reading of a piece of paper is 4.5 cm, but since the ruler does not use decimals, you round it to 5 cm), or
 Approximations are used instead of the real data (e.g., 3.14 instead of π).

In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm.

Given some value v and its approximation v_approx, the absolute error is

ε = |v − v_approx|

where the vertical bars denote the absolute value. If v ≠ 0, the relative error is

η = |v − v_approx| / |v|

and the percent error is

δ = η × 100%

In other words, the absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value. The percent error is the relative error expressed in terms of per 100.

An error bound is an upper limit on the relative or absolute size of an approximation error. One commonly distinguishes between the relative error and the absolute error.

 How to calculate approximation error?

There are two main ways to gauge how accurate an estimate xe of the actual (not precisely known) value x is:

The absolute approximation error, defined as |xe − x|, and

The relative approximation error, usually defined as the ratio of the absolute error and the actual value:

|xe − x| / |x|

Since the actual value x is not known exactly, we cannot compute the exact values of the corresponding errors. Often, however, we know an upper bound on one of these errors (or on both of them). Such an upper bound serves as a description of approximation accuracy: we can say that we have an approximation with an accuracy of ±0.1, or with an accuracy of 5%.
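
As a quick illustration, here is a minimal Python sketch of the three error measures defined above (the function names are our own, and the "exact" value is assumed known only for demonstration):

# A minimal sketch of the error measures defined above, assuming the
# exact value is known for illustration (here: pi vs. 22/7).
import math

def absolute_error(v, v_approx):
    """|v - v_approx|: magnitude of the difference."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error divided by |v|; requires v != 0."""
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    """Relative error expressed per 100."""
    return 100.0 * relative_error(v, v_approx)

v, v_approx = math.pi, 22 / 7
print(absolute_error(v, v_approx))   # ~0.00126
print(relative_error(v, v_approx))   # ~0.000402
print(percent_error(v, v_approx))    # ~0.0402 %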

2. Taylor Series and Polynomial Approximations:

Taylor series expansions allow us to approximate functions by polynomials. Numerical methods often use polynomial approximations to represent more complex functions, making calculations more manageable.

A Taylor series approximation uses a Taylor series to represent a function as a polynomial whose values are very close to those of the function in a neighborhood around a specified value x = a:

f(x) ≈ f(a) + f'(a)(x − a) + f''(a)(x − a)^2 / 2! + … + f^(n)(a)(x − a)^n / n!
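
To make this concrete, a minimal Python sketch of a Taylor polynomial approximation of e^x around a = 0 might look as follows (the helper name taylor_exp is our own; it uses the fact that every derivative of e^x at 0 equals 1, which keeps the example self-contained):

# A minimal sketch of a Taylor polynomial approximation of e^x at a = 0,
# built directly from the series: sum of f^(k)(a) (x - a)^k / k!.
import math

def taylor_exp(x, n_terms):
    """Approximate e^x with the first n_terms terms of its Taylor series at a = 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

for n in (2, 4, 8, 12):
    approx = taylor_exp(1.0, n)
    print(n, approx, abs(math.e - approx))  # error shrinks as terms are added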
3. Interpolation and Extrapolation:

Interpolation involves estimating values between known data points, while extrapolation estimates values beyond the known data range. Techniques like Lagrange interpolation, Newton interpolation, and spline interpolation are commonly used.

Extrapolation refers to estimating an unknown value based on extending a known sequence of values or facts. To extrapolate is to infer something not explicitly stated from existing information. Interpolation is the act of estimating a value within two known values that exist within a sequence of values.

 Understanding extrapolation and interpolation via prefixes

The prefixes themselves are a helpful mnemonic: extra- means "in addition to" or "outside", while inter- means "in between". Both extrapolation and interpolation are useful methods to determine or estimate hypothetical values for an unknown variable based on the observation of other data points. However, it can be hard to distinguish between these methods and understand how they differ from each other.

 Common interpolation and extrapolation methods

Three of the most common interpolation methods are the following:

 linear interpolation
 polynomial interpolation
 spline interpolation

Three of the most common extrapolation methods are the following:

 linear extrapolation
 polynomial extrapolation
 conic extrapolation
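
As a small illustration of the simplest of these, here is a Python sketch of linear interpolation and extrapolation from two known points (the function name and data are made up for the example); the same straight-line formula interpolates when x lies between the points and extrapolates when it lies beyond them:

# A minimal sketch of linear interpolation and extrapolation between two
# known points (x0, y0) and (x1, y1).
def linear_estimate(x, x0, y0, x1, y1):
    """Estimate y at x from the straight line through (x0, y0) and (x1, y1)."""
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

# Known data points
x0, y0 = 1.0, 2.0
x1, y1 = 3.0, 6.0

print(linear_estimate(2.0, x0, y0, x1, y1))  # 4.0  -> interpolation (between)
print(linear_estimate(5.0, x0, y0, x1, y1))  # 10.0 -> extrapolation (beyond)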
4. Numerical Integration:

Numerical integration techniques approximate the definite integral of a function over a specified interval. Methods like the trapezoidal rule, Simpson's rule, and numerical quadrature are used to estimate integrals.

• Newton's method

Some equations cannot be solved using algebra or other analytical methods, and for these we need numerical methods. Newton's method is one such method and allows us to approximate a solution of f(x) = 0.

• Simpson's rule

Many important integrals cannot be evaluated in terms of basic integration rules or elementary functions. Simpson's rule is a numerical method that approximates the value of such a definite integral.

• Trapezoidal rule

The trapezoidal rule is likewise a numerical method that approximates the value of a definite integral when the antiderivative cannot be expressed in terms of basic functions.
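
For illustration, minimal Python sketches of the composite trapezoidal and Simpson's rules might look like the following (function names are our own; the test integral of sin x over [0, π] has exact value 2):

# Minimal sketches of the composite trapezoidal rule and Simpson's rule
# on a uniform grid with n subintervals.
import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

print(trapezoidal(math.sin, 0, math.pi, 100))  # ~1.99984
print(simpson(math.sin, 0, math.pi, 100))      # ~2.00000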

5. Numerical Differentiation:

Numerical differentiation approximates the derivative of a function at a given point. Methods like finite differences and central differences are used to estimate derivatives.

Numerical differentiation is the process of finding the numerical value of a derivative of a given function at a given point. In general, numerical differentiation is more difficult than numerical integration. This is because while numerical integration requires only good continuity properties of the function being integrated, numerical differentiation requires more complicated properties such as Lipschitz classes. Numerical differentiation is implemented as ND[f, x, x0, Scale -> scale] in the Wolfram Language package NumericalCalculus.

There are many applications where derivatives need to be computed numerically. The simplest approach simply uses the definition of the derivative with a small finite step size h:

f'(x) ≈ (f(x + h) − f(x)) / h
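
As an illustration, a minimal Python sketch of the forward- and central-difference approximations (helper names are ours; the step size h is a tunable assumption) could look like this:

# Forward- and central-difference approximations to f'(x), using the
# definition of the derivative with a small finite step h.
import math

def forward_diff(f, x, h=1e-6):
    """f'(x) ~ (f(x + h) - f(x)) / h  (first-order accurate)."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-6):
    """f'(x) ~ (f(x + h) - f(x - h)) / (2h)  (second-order accurate)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.0  # exact derivative of sin at 1.0 is cos(1.0)
print(forward_diff(math.sin, x0), central_diff(math.sin, x0), math.cos(x0))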
6. Root Finding:

In mathematics and computing, a root-finding algorithm is an algorithm for finding zeroes, also called "roots", of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As the zeroes of a function generally cannot be computed exactly or expressed in closed form, root-finding algorithms provide approximations to zeroes, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound). Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) − g(x).

Methods:

 Bracketing Methods
 Interpolation
 Iterative Methods
 Combination of Methods
 Roots of Polynomials
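
As a concrete example of a bracketing method, here is a minimal Python sketch of bisection (the function name and tolerances are illustrative choices): it keeps halving an interval [a, b] on which f changes sign.

# A minimal sketch of the bisection method, a bracketing root finder.
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Return an approximate root of f in [a, b]; f(a) and f(b) must differ in sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m  # root lies in [a, m]
        else:
            a = m  # root lies in [m, b]
        if b - a < tol:
            break
    return 0.5 * (a + b)

# Root of x^2 - 2 in [1, 2] is sqrt(2) ~ 1.41421356
print(bisection(lambda x: x * x - 2, 1.0, 2.0))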

7. Linear Systems:

Solving systems of linear equations is a common problem in various fields. Numerical methods for solving linear systems include Gaussian elimination, LU decomposition, and iterative methods like the Jacobi and Gauss-Seidel methods.
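
For illustration, a minimal Python sketch of the Jacobi method might look as follows (the 2×2 system is made up for the example and is strictly diagonally dominant, which guarantees convergence):

# A minimal sketch of the Jacobi iterative method for Ax = b.
def jacobi(A, b, iterations=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            # Sum of off-diagonal terms using the previous iterate
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x

A = [[4.0, 1.0],
     [2.0, 3.0]]
b = [9.0, 13.0]
print(jacobi(A, b))  # -> approximately [1.4, 3.4]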
8. Eigenvalue Problems:

Eigenvalue problems involve finding eigenvalues and eigenvectors of matrices. These problems have applications in areas such as physics, engineering, and computer graphics. Methods like the power iteration and QR algorithm are used for solving eigenvalue problems.
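
As an illustration, here is a minimal Python sketch of power iteration (names and the sample matrix are our own; for suitable starting vectors the method converges to the eigenvector of the dominant, largest-magnitude eigenvalue):

# A minimal sketch of power iteration for the dominant eigenvalue.
def power_iteration(A, iterations=100):
    n = len(A)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]  # normalize to avoid overflow
    # Rayleigh-quotient estimate of the dominant eigenvalue
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(Av[i] * v[i] for i in range(n)) / sum(v[i] * v[i] for i in range(n))
    return lam, v

A = [[2.0, 1.0],
     [1.0, 2.0]]
print(power_iteration(A))  # dominant eigenvalue 3, eigenvector ~ [1, 1]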

9. Ordinary Differential Equations (ODEs):

Many scientific and engineering problems involve solving ordinary differential equations. Numerical methods like Euler's method, Runge-Kutta methods, and finite difference methods are employed for approximating solutions to ODEs; a small sketch of Euler's method appears at the end of this section.

In mathematics, the term "Ordinary Differential Equation", also known as ODE, refers to an equation that contains only one independent variable and one or more derivatives with respect to that variable. In other words, an ODE is represented as a relation having one independent variable x, the real dependent variable y, and some of its derivatives y', y'', …, y^(n), … with respect to x.

 Order

The order of an ordinary differential equation is defined to be the order of the highest derivative that occurs in the equation. The general form of an n-th order ODE is given as

F(x, y, y', …, y^(n)) = 0

Note that y' can be either dy/dx or dy/dt, and y^(n) can be either d^n y/dx^n or d^n y/dt^n.

An n-th order ordinary differential equation is linear if it can be written in the form

a0(x) y^(n) + a1(x) y^(n−1) + … + an(x) y = r(x)

The functions aj(x), 0 ≤ j ≤ n, are called the coefficients of the linear equation. The equation is said to be homogeneous if r(x) = 0. If r(x) ≠ 0, it is said to be a non-homogeneous equation.

 Types

The ordinary differential equation is further classified into three types. They are:

 Autonomous ODE
 Linear ODE
 Non-linear ODE
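
To make the simplest of these methods concrete, here is a minimal Python sketch of Euler's method applied to y' = y with y(0) = 1, whose exact solution is e^x (function names and step counts are illustrative):

# A minimal sketch of Euler's method for the first-order ODE y' = f(x, y).
import math

def euler(f, x0, y0, x_end, n_steps):
    """March from (x0, y0) to x_end in n_steps steps of y += h * f(x, y)."""
    h = (x_end - x0) / n_steps
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000)
print(approx, math.e, abs(math.e - approx))  # error shrinks as n_steps grows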

10. Partial Differential Equations (PDEs):

A partial differential equation (or briefly a PDE) is a mathematical equation that involves two or more independent variables, an unknown function (dependent on those variables), and partial derivatives of the unknown function with respect to the independent variables. The order of a partial differential equation is the order of the highest derivative involved. A solution (or a particular solution) to a partial differential equation is a function that solves the equation or, in other words, turns it into an identity when substituted into the equation. A solution is called general if it contains all particular solutions of the equation concerned.

Partial differential equations are used to mathematically formulate, and thus aid the
solution of, physical and other problems involving functions of several variables, such as
the propagation of heat or sound, fluid flow, elasticity, electrostatics, electrodynamics, etc.
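
As one concrete illustration, a minimal Python sketch of an explicit finite-difference scheme for the one-dimensional heat equation u_t = α u_xx (with zero boundary values; the grid sizes and stability factor are illustrative choices) might look like this:

# An explicit finite-difference scheme for u_t = alpha * u_xx on [0, 1]
# with fixed zero boundary values. The time step obeys the stability
# condition alpha * dt / dx^2 <= 1/2.
alpha, nx, nt = 1.0, 11, 100
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha  # satisfies the stability condition

# Initial temperature: a "hot spot" in the middle of the rod
u = [0.0] * nx
u[nx // 2] = 1.0

for _ in range(nt):
    u_new = u[:]
    for i in range(1, nx - 1):
        u_new[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = u_new

print([round(v, 4) for v in u])  # the initial hot spot has spread out and decayed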
11. Convergence and Stability:

Convergence refers to the property of a numerical method to produce solutions that approach the true solution as the computational effort increases. Stability ensures that errors introduced during calculations do not grow uncontrollably.

 Stability:

The condition of a mathematical problem relates to its sensitivity to changes in its input values. We say that a computation is numerically unstable if the uncertainty of the input values is grossly magnified by the numerical method. Stability means that errors at any stage of the computation are not amplified but are attenuated as the computation progresses.

 Convergence:

Methods are said to be convergent because they move closer to the true solution as the computation progresses. Often, a method with rapid convergence requires a more refined initial guess and more complex programming than a method with slower convergence. Depending on your problem and the algorithm you use, it can have varying degrees of consistency, stability, and convergence.

 Some examples:

The bisection method has a linear rate of convergence: the bracketing interval, and hence the error bound, is halved with every iteration. However, the bisection method fails for nonlinear equations whose roots are complex, since it cannot yield a complex number as the solution. It is a very stable method for equations that have at least one real root.

The Newton-Raphson method has a quadratic rate of convergence: the number of correct digits roughly doubles with every iteration, a much faster rate of convergence. The Newton-Raphson method can handle nonlinear equations whose roots are complex, if programmed accordingly. However, results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, in which case it becomes unstable.
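
To illustrate the contrast, here is a minimal Python sketch of Newton-Raphson iteration on f(x) = x^2 − 2 (names and tolerances are our own); in practice it reaches machine precision in a handful of iterations, reflecting its quadratic convergence:

# A minimal sketch of Newton-Raphson iteration: x <- x - f(x) / f'(x).
import math

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root, abs(root - math.sqrt(2)))  # converges in a handful of iterations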

12. Conditioning and Ill-Conditioning:

The conditioning of a problem measures how sensitive the solution is to changes in input data. Ill-conditioned problems can lead to numerical instability and inaccurate results.
