
COOCHBEHAR GOVERNMENT ENGINEERING COLLEGE
Department of Electronics & Communication Engineering

CA - 1

Name – Sayan Sarkar

Roll No. – 34900323048

Subject Code – BS-M401    Year – 2nd

Subject Name – Numerical Methods


1. Why do we study Numerical Methods instead of Analytic Methods?
Numerical methods are techniques used to find approximate solutions to mathematical problems,
especially when analytical (exact) solutions are difficult or impossible to obtain. We study
numerical methods instead of analytic methods for the following reasons:
When Exact Solutions are Difficult or Impossible
• Many mathematical problems, such as nonlinear equations, differential equations, and
integrals, do not have exact solutions in a closed-form expression.

• Examples:

o Solving e^x = x^2 analytically is difficult, but numerical methods like the Newton-Raphson method can provide an approximate solution (see the sketch after this list).

o Integrals such as ∫ e^(−x^2) dx do not have an elementary antiderivative but can be approximated using numerical integration methods like the Trapezoidal or Simpson’s rule.
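As an illustration, here is a minimal Python sketch of the Newton-Raphson iteration applied to e^x = x^2, recast as finding a root of f(x) = e^x − x^2; the starting guess x0 = −1 and the tolerance are illustrative choices, not values from the original problem:

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Approximate a root of f using the Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)      # Newton update: x_new = x - f(x)/f'(x)
        x -= step
        if abs(step) < tol:      # converged: successive iterates agree
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# f(x) = e^x - x^2 has a single real root near x = -0.70
root = newton_raphson(lambda x: math.exp(x) - x**2,
                      lambda x: math.exp(x) - 2*x,
                      x0=-1.0)
print(root)  # ~ -0.7034674
```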
Handling Large and Complex Problems
• In engineering, physics, and real-world applications, many problems involve large datasets
and complex equations that cannot be solved using simple algebraic techniques.

• Numerical methods allow the computation of large systems of equations efficiently.


Use of Computers
• Computers can carry out numerical computations much faster than symbolic (analytical) manipulation.

• Numerical methods provide algorithms that can be implemented in programming languages like Python, MATLAB, or C++ to solve complex mathematical problems.
Approximate Solutions are Sufficient
• In practical applications, an approximate solution with a small error margin is often
sufficient.

• Example: Weather prediction, structural engineering simulations, and financial models rely on numerical methods rather than exact solutions.
Numerical Stability and Flexibility
• Some problems are sensitive to small changes in input values (e.g., chaotic systems in
physics).

• Numerical methods provide stable and controlled approaches to find solutions with desired
accuracy.
2. Approximation in Numerical Computation (Short Note)
Approximation in numerical computation refers to the process of finding an estimated solution to
a mathematical problem when an exact solution is difficult to obtain. This is done using different
numerical techniques that balance accuracy and computational efficiency.
Types of Approximations
1. Truncation Approximation:

o Occurs when infinite processes (like infinite series) are cut off after a finite number
of terms.

o Example: Approximating sin x using its Taylor series expansion and keeping only the first few terms.

2. Round-off Approximation:

o Due to the finite precision of computers, floating-point arithmetic introduces small errors.

o Example: Storing π as 3.14159 instead of its infinite decimal expansion (see the sketch after this list).

3. Interpolation and Extrapolation:

o Approximating unknown values using known data points.

o Example: Using Lagrange interpolation to estimate values between measured points.

4. Numerical Integration and Differentiation:

o When functions are complex, numerical methods approximate derivatives and integrals.

o Example: Simpson’s Rule is used to approximate definite integrals.
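A short Python sketch contrasting the first two kinds of approximation; keeping three series terms and six digits of π are illustrative choices:

```python
import math

def sin_taylor(x, n_terms=3):
    """Truncation: keep only the first n_terms of the Taylor series of sin x."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

x = 1.0
print(math.sin(x) - sin_taylor(x))   # truncation error, ~ -2e-4

# Round-off: storing pi with only six significant digits
pi_stored = 3.14159
print(math.pi - pi_stored)           # round-off error, ~ 2.7e-6
```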

Importance of Approximation
• Helps solve real-world problems in engineering, physics, and finance where exact solutions
are not feasible.

• Allows faster computation using iterative methods like the Newton-Raphson method for
root-finding.

• Ensures better efficiency in simulations, optimizations, and artificial intelligence applications.
3. Truncation and Rounding Errors (With Example)
Errors in numerical computations occur due to approximations and the limitations of representing
numbers in a computer. Two major types of such errors are truncation errors and rounding
errors.
Truncation Error
• This error arises when an infinite series or iterative process is cut off after a finite number
of terms.

• Example: Approximating the function e^x using its Taylor series expansion:

e^x = 1 + x + x^2/2! + x^3/3! + ⋯

• If we approximate e^x by only the first three terms: e^x ≈ 1 + x + x^2/2!

The difference between the actual value and the approximation is the truncation error.
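Continuing the example above, a minimal Python check of the truncation error; the evaluation point x = 0.5 is an arbitrary choice:

```python
import math

x = 0.5
approx = 1 + x + x**2 / 2   # first three Taylor terms of e^x
exact = math.exp(x)         # true value, ~1.64872
print(exact - approx)       # truncation error, ~0.0237
```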
Rounding Error
• This error occurs due to the finite precision of representing numbers in a computer.

• Example: The number π is an irrational number, but in computers it is stored as 3.14159, leading to rounding errors in calculations involving π.

4. Fixed and Floating Point and Propagation of Errors (With Example)


Fixed-Point Representation
• In fixed-point representation, numbers are stored with a fixed number of decimal places.

• Example: If a system stores numbers with 2 decimal places, then

o 3.14159 is stored as 3.14 (causing truncation error).

Floating-Point Representation
• In floating-point representation, numbers are stored in the form x = m × 10^e, where m is the mantissa and e is the exponent.

• Example: The number 0.0000123 can be written as 1.23 × 10⁻⁵ in floating-point format.
Propagation of Errors
• When numerical errors accumulate during calculations, they can propagate and
significantly affect results.

• Example:

o Suppose we compute A = 3.14 and B = 2.71, but due to rounding errors, we use A = 3.1 and B = 2.7.

o The product A × B should be 3.14 × 2.71 = 8.5094.

o But using the approximations, we get 3.1 × 2.7 = 8.37.

o This difference propagates in further calculations, leading to inaccurate results (reproduced in the sketch below).
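The propagation example can be reproduced directly in Python; the numbers are the ones used above:

```python
# Exact inputs versus inputs corrupted by rounding
a, b = 3.14, 2.71            # intended values
a_r, b_r = 3.1, 2.7          # rounded values actually used

print(a * b)                 # ~8.5094
print(a_r * b_r)             # ~8.37
print(a * b - a_r * b_r)     # ~0.139: the input errors propagate to the result
```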

5. Weierstrass Approximation Theorem


The Weierstrass Approximation Theorem states that:
Any continuous function f(x) defined on a closed interval [a, b] can be approximated as
closely as desired by a polynomial function P(x).
Importance:
• This theorem is fundamental in numerical analysis and provides a basis for polynomial
interpolation methods.

• It ensures that polynomials can approximate complex functions efficiently.


Example:

Consider the function f(x) = sin x on the interval [0, π].

• Using the Weierstrass theorem, we can approximate sin x using polynomials:

P(x) = x − x^3/3! + x^5/5! − ⋯

o The more terms we take, the better the approximation.
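A minimal Python sketch of this convergence: the maximum error of the truncated series over [0, π] shrinks as more terms (a higher polynomial degree) are kept. The 101-point sample grid is an illustrative choice:

```python
import math

def sin_poly(x, n_terms):
    """Truncated series P(x) = x - x^3/3! + x^5/5! - ... (n_terms terms)."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

xs = [i * math.pi / 100 for i in range(101)]   # sample grid on [0, pi]
for n in (2, 4, 6):
    err = max(abs(math.sin(x) - sin_poly(x, n)) for x in xs)
    print(f"degree {2*n - 1:2d}: max error {err:.2e}")  # error shrinks with degree
```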


6. Advantages and Disadvantages of Newton's and Lagrange's
Interpolation

Newton’s Interpolation:
Newton’s Interpolation is based on divided differences and provides an efficient way to compute
interpolating polynomials.

Advantages:
• It is more computationally efficient when adding new points.

• Reduces computational effort for higher-degree polynomials.

• Works well when data points are unequally spaced.


Disadvantages:
• Requires constructing and storing a divided-difference table before the polynomial can be evaluated.

• Susceptible to numerical instability for higher-degree polynomials.

Lagrange’s Interpolation:
Lagrange’s Interpolation constructs a polynomial by using Lagrange basis polynomials.
Advantages:
• Simple and straightforward to implement.

• No divided-difference table is needed; the polynomial can be written down directly from the data points.


Disadvantages:
• Computationally expensive for large data sets.

• Requires recomputation of the entire polynomial if a new point is added.

• Susceptible to Runge’s phenomenon, where oscillations occur for high-degree polynomials.
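To make the comparison concrete, here is a minimal Python sketch of Lagrange's method; the data points (sampled from y = x^2) and the query point 1.5 are hypothetical:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Basis polynomial L_i(x): equals 1 at x_i and 0 at every other node
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Hypothetical data sampled from y = x^2; the interpolant at 1.5 returns 2.25
print(lagrange_interpolate([0, 1, 2, 3], [0, 1, 4, 9], 1.5))
```

Note that if a fifth data point were added, every basis polynomial would change, which is exactly the recomputation cost listed among the disadvantages above.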


7. Graphical Explanation of Trapezoidal and Simpson’s 1/3 Rule in
Numerical Integration
Numerical integration is used to approximate definite integrals when an analytical solution is
difficult or impossible. Two common numerical integration techniques are Trapezoidal Rule and
Simpson’s 1/3 Rule.

Trapezoidal Rule:
• The Trapezoidal Rule approximates the area under a curve by dividing the integral into
trapezoids instead of rectangles.

• The formula for the Trapezoidal Rule on an interval [a, b] with one subinterval is:

∫[a,b] f(x) dx ≈ ((b − a)/2) [f(a) + f(b)]

• If the interval is divided into n equal subintervals, the formula extends to:

∫[a,b] f(x) dx ≈ (h/2) [f(x_0) + 2 Σ_{i=1..n−1} f(x_i) + f(x_n)], where h = (b − a)/n.

Graphical Explanation:

• The function f(x) is approximated by a straight line connecting consecutive points.

• This method is accurate for linear functions but introduces errors for curved functions.
Simpson’s 1/3 Rule:
• Simpson’s 1/3 Rule improves accuracy by approximating the curve using a parabolic
function instead of straight-line segments.

• The formula for a single interval [a, b] is:

∫[a,b] f(x) dx ≈ ((b − a)/6) [f(a) + 4f((a + b)/2) + f(b)]

• For n subintervals, where n is even:

∫[a,b] f(x) dx ≈ (h/3) [f(x_0) + 4 Σ_{odd i} f(x_i) + 2 Σ_{even i} f(x_i) + f(x_n)], where h = (b − a)/n.

Graphical Explanation:

• Instead of straight lines, this method fits parabolas through consecutive points.

• It provides a higher accuracy than the Trapezoidal Rule for smooth functions.

Comparison:
Method | Accuracy | Complexity | Suitable for
Trapezoidal Rule | Low | Simple | Linear functions or rough estimates
Simpson’s 1/3 Rule | High | Slightly more complex | Smooth functions
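Both rules can be compared on ∫ sin x dx over [0, π], whose exact value is 2; a minimal Python sketch (n = 10 subintervals is an arbitrary choice):

```python
import math

def trapezoidal(f, a, b, n):
    """Composite Trapezoidal Rule with n subintervals."""
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 Rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd-index nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return h / 3 * s

print(trapezoidal(math.sin, 0, math.pi, 10))  # ~1.98352 (noticeable error)
print(simpson(math.sin, 0, math.pi, 10))      # ~2.0000068 (far more accurate)
```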
8. Solve an Example Using Newton’s Forward, Lagrange’s, and
Newton’s Divided Difference Interpolation
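As one illustrative sketch for this problem, Newton's divided-difference interpolation can be implemented in Python and tested on hypothetical data sampled from f(x) = x^2 + 1; both the data set and the evaluation point x = 1.5 are assumptions for illustration:

```python
def divided_differences(xs, ys):
    """Build Newton's divided-difference coefficients f[x0], f[x0,x1], ..."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Overwrite in place from the bottom so earlier columns stay intact
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial via nested multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Hypothetical data from f(x) = x^2 + 1; the interpolant at 1.5 returns 3.25
xs, ys = [0, 1, 2, 3], [1, 2, 5, 10]
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 1.5))  # 3.25
```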
