Numerical Methods
BSEE-3rd Year
Here are some key mathematical concepts that form the foundation of numerical
methods:
Approximation error in some data is the discrepancy between an exact value and
some approximation to it. An approximation error can occur because:
• The measurement of the data is not precise due to the instruments (e.g., the accurate
reading of a piece of paper is 4.5 cm, but since the ruler does not use decimals, you round it
to 5 cm), or
• Approximations are used instead of the real data (e.g., using 3.14 instead of the full value of π).
Given some value v and its approximation v_approx, the absolute error is
ε = |v - v_approx|,
where the vertical bars denote the absolute value. If v ≠ 0, the relative error is
η = |v - v_approx| / |v| = ε / |v|.
In other words, the absolute error is the magnitude of the difference between the exact
value and the approximation. The relative error is the absolute error divided by the
magnitude of the exact value. The percent error is the relative error expressed per 100,
i.e., as a percentage.
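As a minimal Python sketch of these three definitions (the function names are illustrative, not from the notes):

```python
# Minimal sketch of the error measures defined above.

def absolute_error(v, v_approx):
    """|v - v_approx|: magnitude of the difference."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error divided by |v|; requires v != 0."""
    return absolute_error(v, v_approx) / abs(v)

def percent_error(v, v_approx):
    """Relative error expressed per 100."""
    return 100.0 * relative_error(v, v_approx)

# The ruler example above: true length 4.5 cm, reading rounded to 5 cm.
print(absolute_error(4.5, 5.0))  # 0.5 (cm)
print(relative_error(4.5, 5.0))  # ~0.111
print(percent_error(4.5, 5.0))   # ~11.1 (%)
```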
An error bound is an upper limit on the relative or absolute size of an approximation error.
One commonly distinguishes between the relative error and the absolute error.
There are two main ways to gauge how accurate an estimate of the actual
(not precisely known) value x is:
• The absolute approximation error: the magnitude of the difference between the estimate and x.
• The relative approximation error, usually defined as the ratio of the absolute error
to the actual value: relative error = (absolute error) / |x|.
Since the actual value x is not known exactly, we cannot compute the exact values
of the corresponding errors. Often, however, we know upper bounds on one of these
errors (or on both of them). Such an upper bound serves as a description of approximation
accuracy: we can say that we have an approximation with an accuracy of ±0.1, or with an
accuracy of 5%.
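As a small sketch of how such an accuracy statement can be used: since x itself is unknown, the bound is applied around the computed approximation (the function names and numbers are illustrative).

```python
# Turn a stated accuracy into an interval expected to contain the
# unknown true value x (assuming the stated bound really holds).

def interval_from_absolute_bound(x_approx, abs_bound):
    """Accuracy of +/- abs_bound around the approximation."""
    return (x_approx - abs_bound, x_approx + abs_bound)

def interval_from_relative_bound(x_approx, rel_bound):
    """Accuracy stated as a relative bound, e.g. 0.05 for 5%."""
    delta = rel_bound * abs(x_approx)
    return (x_approx - delta, x_approx + delta)

print(interval_from_absolute_bound(2.7, 0.1))   # (2.6, 2.8)
print(interval_from_relative_bound(2.7, 0.05))  # (2.565, 2.835)
```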
Both extrapolation and interpolation are useful methods to determine or estimate the
hypothetical values for an unknown variable based on the observation of other datapoints.
However, it can be hard to distinguish between these methods and understand how they
differ from each other. Common techniques include the following (a small worked example
follows the list):
• linear interpolation
• polynomial interpolation
• spline interpolation
• linear extrapolation
• polynomial extrapolation
• conic extrapolation
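As a minimal Python sketch of the simplest case, linear interpolation and extrapolation from two known points (the function name and data points are illustrative):

```python
# Straight-line estimate from two known data points (x0, y0) and (x1, y1).

def linear_estimate(x, x0, y0, x1, y1):
    """Estimate y at x on the line through (x0, y0) and (x1, y1).

    If x0 <= x <= x1 this is linear interpolation; outside that range
    the same formula gives linear extrapolation.
    """
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

# Known points: (1, 10) and (3, 18)
print(linear_estimate(2.0, 1, 10, 3, 18))  # 14.0 (interpolation)
print(linear_estimate(4.0, 1, 10, 3, 18))  # 22.0 (extrapolation)
```

The same straight-line formula is used in both cases; only the position of x relative to the known points decides whether the estimate counts as interpolation or extrapolation.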
4. Numerical Integration:
• Newton's method
Some equations cannot be solved using algebra or other analytical methods, so we need
numerical methods. Newton's method is one such method and allows us to approximate a
solution of f(x) = 0.
• Simpson's rule
Some definite integrals cannot be evaluated in terms of basic integration rules or
elementary functions. Simpson's rule is a numerical method that approximates the value
of such a definite integral.
• Trapezoidal law
A trapezoidal rule is a numerical method that calculates the numerical value of a direct
combination. The other important ones cannot be assessed in terms of integration rules or
basic functions.
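To make the three methods above concrete, here is a minimal Python sketch; the function names, tolerances, step counts, and test problems are illustrative choices rather than anything specified in the notes.

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0, starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def trapezoidal(f, a, b, n=100):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def simpson(f, a, b, n=100):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

# Root of x^2 - 2 = 0 (sqrt(2)) and the integral of sin(x) over [0, pi] (exactly 2).
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ~1.41421356
print(trapezoidal(math.sin, 0.0, math.pi))                # ~1.99984
print(simpson(math.sin, 0.0, math.pi))                    # ~2.00000
```

Newton's method needs the derivative f'(x) and a starting guess, while the two integration rules only sample f at evenly spaced points; Simpson's rule is usually more accurate than the trapezoidal rule for the same number of samples.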
5. Numerical Differentiation:
Methods:
• Bracketing Methods
• Interpolation
• Iterative Methods
• Combination of Methods
• Roots of Polynomials
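The families just listed are most often discussed in the context of root finding; for numerical differentiation itself, a minimal central-difference sketch might look like this (the function name and step size h are illustrative choices):

```python
import math

def central_difference(f, x, h=1e-5):
    """Approximate f'(x) with the second-order central difference (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# d/dx sin(x) at x = 0 is cos(0) = 1.
print(central_difference(math.sin, 0.0))  # ~1.0
```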
7. Linear Systems:
Order
The order of an ordinary differential equation is defined to be the order of the highest
derivative that occurs in the equation. The general form of an n-th order ODE is given as
F(x, y, y', ..., y^(n)) = 0.
Note that y' can be either dy/dx or dy/dt, and y^(n) can be either d^n y/dx^n or d^n y/dt^n.
For example, y'' + 3y' + 2y = sin(x) is a second-order ODE.
An n-th order ordinary differential equation is linear if it can be written in the form
a_n(x) y^(n) + a_(n-1)(x) y^(n-1) + ... + a_1(x) y' + a_0(x) y = g(x).
Types
Ordinary differential equations are further classified into three types:
• Autonomous ODE
• Linear ODE
• Non-linear ODE
Partial differential equations are used to mathematically formulate, and thus aid the
solution of, physical and other problems involving functions of several variables, such as
the propagation of heat or sound, fluid flow, elasticity, electrostatics, electrodynamics, etc.
11. Convergence and Stability:
Stability:
A numerical method is stable if small errors introduced during the computation (for
example, round-off or small perturbations in the data) remain bounded and do not grow
enough to ruin the computed result.
Convergence:
Methods are said to be convergent because they move closer to the true solution as the
computation progresses. Often, a rapidly convergent method requires a more refined initial
guess and more complex programming than a method with slower convergence.
Depending on your problem and the algorithm you use, it can have varying degrees of
consistency, stability, and convergence.
Some examples:
The bisection method has a linear rate of convergence: with every iteration the bracketing
interval is halved, so the error bound gains roughly one binary digit (about one decimal
digit every three to four iterations). However, the bisection method fails for equations
whose roots are complex, because it can never yield a complex number as the solution. It is
a very stable method for equations that have at least one real root that can be bracketed by
a sign change. A minimal sketch follows.
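A minimal Python sketch of the bisection method described above (the tolerance and the test equation are illustrative choices, not from the notes):

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must bracket a root (opposite signs).")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m  # root lies in the left half of the interval
        else:
            a = m  # root lies in the right half of the interval
        if b - a < tol:
            break
    return 0.5 * (a + b)

# x^3 - x - 2 = 0 has a real root near 1.521, bracketed by [1, 2].
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))  # ~1.5213797
```

Because the interval is halved on every pass, about 27 iterations are already enough to shrink a unit-length bracket below 1e-8, which is the linear convergence described above.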
Sources:
https://abhyankar-ameya.medium.com/foundations-of-numerical-methods-the-pythonic-way-d52be072ae20
https://www.engati.com/glossary/approximation-error
https://brilliant.org/wiki/taylor-series-approximation/
https://www.techtarget.com/whatis/definition/extrapolation-and-interpolation
https://www.geeksforgeeks.org/trapezoidal-rule/
https://amsi.org.au/ESA_Senior_Years/SeniorTopic3/3j/3j_2content_2.html
https://mathworld.wolfram.com/NumericalDifferentiation.html
https://encyclopedia.pub/entry/29233
https://www.mathsisfun.com/algebra/systems-linear-equations.html
https://www.cs.toronto.edu/~enright/teaching/D37/Week4.pdf
https://byjus.com/maths/ordinary-differential-equations/
http://www.scholarpedia.org/article/Partial_differential_equation