Siddhant CM File
ES-251
SEMESTER: 3rd
Group: EEE-1
INDEX
Marks per experiment: Program (2), Output (2), File (2), Viva (2), Submission (2), Total (10)
EXP NO. 1 - To understand and implement the Bisection Method, a numerical technique for finding the root of a real-valued function within a given interval.
EXP NO. 2 - To calculate the roots of a given polynomial using the Newton-Raphson Method.
EXPERIMENT – 1
AIM:
To understand and implement the Bisection Method, a numerical technique for finding the root of a
real-valued function within a given interval.
THEORY:
Let f be a continuous function defined on an interval [a, b] where f(a) and f(b) have opposite signs. The
Bisection Method is applicable for solving the equation f(x) = 0 for a real variable x. At each step, the
interval is divided into two halves by computing the midpoint c = (a + b)/2 and the value of f(c) at that
point. Unless c itself is the root, there are two possibilities: either f(a) and f(c) have opposite signs and
bracket a root, or f(c) and f(b) have opposite signs and bracket a root. The subinterval that brackets the
root is chosen as the new interval for the next step, and this process is repeated until the interval is
sufficiently small. If f(a) and f(c) have opposite signs, the value of b is replaced by c; if f(b) and f(c) have
opposite signs, the value of a is replaced by c. In the case that f(c) = 0, c is taken as the solution and the
process stops.
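For example, for f(x) = x^3 - x - 1 on the interval [1, 3] (the same function and interval used in the program below):
f(1) = -1 and f(3) = 23, so a root lies in [1, 3].
Step 1: c = (1 + 3)/2 = 2, f(2) = 5 > 0, so the root lies in [1, 2].
Step 2: c = (1 + 2)/2 = 1.5, f(1.5) = 0.875 > 0, so the root lies in [1, 1.5].
Step 3: c = (1 + 1.5)/2 = 1.25, f(1.25) = -0.296875 < 0, so the root lies in [1.25, 1.5].
Each step halves the bracketing interval until it is smaller than the required tolerance.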
ALGORITHM:
Step-1. Start of the program.
Step-2. Input the variables x1, x2 for the task.
Step-3. Check f(x1)*f(x2) < 0.
Step-4. If yes, proceed.
Step-5. If no, exit and print an error message.
Step-6. Repeat steps 7-10 until the stopping condition in step 10 is satisfied.
Step-7. x0 = (x1 + x2)/2
Step-8. If f(x0)*f(x1) < 0:
Step-9. x2 = x0; else x1 = x0.
Step-10. Stopping condition: |(x1 - x2)/x1| < maximum permissible error, or f(x0) = 0.
CODE:
#include<iostream>
#include<cmath>
using namespace std;
float f(float x){
    return x*x*x - x - 1;
}
int main(){
    float a = 1.0;   // f(a) < 0 for this interval
    float b = 3.0;   // f(b) > 0 for this interval
    float c = 0.0;
    cout << "The value of f(a) is : " << f(a) << endl;
    cout << "The value of f(b) is : " << f(b) << endl;
    while(fabs(f(c)) > 0.005){
        c = (a+b)/2;                       // midpoint of the current bracket
        cout << "Value of f(c) : " << f(c) << endl;
        if(f(c) > 0){                      // root lies in [a, c]
            b = c;
        }
        else{                              // root lies in [c, b]
            a = c;
        }
    }
    cout << endl << "The root for this equation is : " << c << endl;
    return 0;
}
OUTPUT:
GRAPH:
RESULT:
The approximate root of the given function f(x) was hence found using the Bisection Method.
EXPERIMENT – 2
Aim:
To calculate the roots of a given polynomial using the Newton-Raphson Method.
Theory:
The Newton-Raphson method is a way to find the root of a function by using its derivative.
The formula for finding the next guess is:
Xn = Xn-1 – f(Xn-1)/f'(Xn-1)
The Newton-Raphson method is based on the idea that a smooth function can be approximated by
a linear function near its root. The method converges quickly if the initial guess is close enough to
the root and the function is not too flat.
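For instance, for f(x) = x^3 - x - 1 (so f'(x) = 3x^2 - 1, as in the program below) with an initial guess of, say, x0 = 1.5:
f(1.5) = 0.875 and f'(1.5) = 5.75, so x1 = 1.5 - 0.875/5.75 ≈ 1.3478.
Repeating the update drives the iterates quickly towards the root ≈ 1.3247.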
Algorithm:
Step-1. Start
Step-2. Define function as f(x)
Step-3. Define first derivative of f(x) as g(x)
Step-4. Input initial guess ‘x0’, tolerable error ‘e’ and maximum iterations ‘N’
Step-5. Initialise iteration counter i=1
Step-6. If g(x0) = 0, then print “Mathematical Error” and goto step-12, otherwise goto step-7
Step-7. Calculate x1 = x0 - f(x0)/g(x0)
Step-8. Increment iteration counter i = i+1
Step-9. If i >= N, then print “Not Convergent” and goto step-12, otherwise goto step-10
Step-10. If |f(x1)| > e, then set x0 = x1 and goto step-6, otherwise goto step-11
Step-11. Print root as x1
Step-12. Stop.
Code:
#include <iostream>
using namespace std;

float fx(float x){
    return x*x*x - x - 1;
}

float fdx(float x){        // first derivative of fx
    return 3*x*x - 1;
}

int main(){
    float nearNum, xN;     // nearNum stores the point around which the root is to be found and xN stores the current root
    int i = 0;
    cout << "Enter the point near which the root is to be calculated : ";
    cin >> nearNum;
    float prev_xN = 0;     // stores consecutive xN values for comparison across iterations and for the breaking condition
    do{
        xN = nearNum - (fx(nearNum))/(fdx(nearNum));
        if(prev_xN == xN){
            break;
        }
        cout << "Value of X" << i << " is : " << xN << endl;
        prev_xN = xN;
        nearNum = xN;
        i++;
    } while(i != 10);
    return 0;
}
OUTPUT:
Graph:
Result: The approximate root of the given function f(x) was hence found using the Newton-Raphson
Method.
EXPERIMENT - 3
Aim: Program for finding roots of f(x)=0 by secant method.
Theory: The secant method is a numerical technique used to approximate the roots of
a real-valued function f(x). It is an iterative, open-method approach that does not require
knowledge of the derivative of the function. The secant method is particularly useful
when you have a good initial guess for the root but not the derivative, making it more
versatile than methods like the Newton-Raphson method.
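For example, for f(x) = x^3 - x - 1 with initial points x1 = 1 and x2 = 2:
f1 = f(1) = -1 and f2 = f(2) = 5, so x3 = (f2*x1 - f1*x2)/(f2 - f1) = (5 + 2)/6 ≈ 1.1667,
and the next iteration continues with x2 and x3.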
ALGORITHM:
1. Decide two initial points x1 and x2 and required accuracy level E.
2. Compute f1 = f (x1) and f2 = f (x2)
3. Compute x3 = (f2 x1 – f1 x2) / (f2 – f1)
4. If |(x3- x2)/ x3| > E, then
set x1 = x2 and f1 = f2
set x2 = x3 and f2 = f(x3)
go to step 3
Else
set root = x3
print results
5. Stop.
Code:
#include <iostream>
#include <cmath>
using namespace std;
float f(float x){
    return x*x*x - x - 1;
}
// One secant update: next estimate from the two previous ones
float secant(float x1, float x2){
    return (f(x2)*x1 - f(x1)*x2)/(f(x2) - f(x1));
}
int main(){
    float x1, x2, x0;
    int k = 0;
    cout << "Enter the two initial guesses : ";
    cin >> x1 >> x2;
    cout << endl;
    if(f(x1)*f(x2) < 0){
        float E = 0.0001;                  // required accuracy level
        do{
            x0 = secant(x1, x2);
            cout << "Current estimate : " << x0 << endl;
            x1 = x2;
            x2 = x0;
        } while(fabs((x2 - x1)/x2) > E);
    }
    else if(f(x1) == 0){
        x0 = x1; k = 1;
        cout << "The root of the equation is : " << x0 << endl;
    }
    else if(f(x2) == 0){
        x0 = x2; k = 1;
        cout << "The root of the equation is : " << x0 << endl;
    }
    else{
        cout << "Root does not lie between " << x1 << " and " << x2 << endl;
        k = 1;
    }
    if(k == 0){
        cout << "The root of the equation is : " << x0 << endl;
    }
    return 0;
}
Output:
Graph:
Result:
The approximate root of the given function f(x) was hence found using the Secant Method.
EXPERIMENT - 4
Aim: To find the value of a function at a given point using Lagrange's Interpolation Formula.
Theory:
Lagrange Interpolation is a way of finding the value of any function at any given point
when the function itself is not given. We use other points on the function to get the value of the
function at any required point.
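In the notation of the algorithm below, Lagrange's formula for the interpolated value at xp is:
yp = Σ (i = 1 to n) Yi * Π (j = 1 to n, j ≠ i) (xp - Xj)/(Xi - Xj)
that is, each data value Yi is weighted by a product that equals 1 at Xi and 0 at every other data point.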
Algorithm:
1. Start
2. Read number of data (n)
3. Read data Xi and Yi for i = 1 to n
4. Read value of the independent variable, say xp, whose corresponding value of the dependent variable, say yp, is to be determined.
5. Initialise: yp = 0
6. For i = 1 to n
       Set p = 1
       For j = 1 to n
           If i ≠ j then
               Calculate p = p * (xp - Xj)/(Xi - Xj)
           End If
       Next j
       Calculate yp = yp + p * Yi
   Next i
7. Display value of yp as the interpolated value.
8. Stop
Code:
#include<iostream>
using namespace std;
int main()
{
    float x[50];
    float y[50];
    float xp, yp = 0, p;
    int i, j, n;
    cout << "Enter number of data: ";
    cin >> n;
    cout << "Enter data:" << endl;
    for(i = 1; i <= n; i++)
    {
        cout << "x[" << i << "] = ";
        cin >> x[i];
        cout << "y[" << i << "] = ";
        cin >> y[i];
    }
    cout << "Enter interpolation point: ";
    cin >> xp;
    for(i = 1; i <= n; i++)
    {
        p = 1;
        for(j = 1; j <= n; j++)
        {
            if(i != j)
            {
                p = p * (xp - x[j])/(x[i] - x[j]);
            }
        }
        yp = yp + p * y[i];
    }
    cout << endl << "Interpolated value at " << xp << " is " << yp;
    return 0;
}
Output:
Graph:
EXPERIMENT - 5
Aim: Program for interpolation using Newton's Divided Difference Formula.
Theory:
Newton's Divided Difference Formula eliminates the drawback of recalculation and
re-computation of interpolation coefficients by using Newton's general interpolation
formula, which uses "divided differences". Before going through the source code for
Newton's Divided Difference method, here is a brief explanation of what divided
differences are, with the formula for divided differences.
The first divided difference is defined as: [x0, x1] = (f(x1) - f(x0))/(x1 - x0). The second
divided difference is defined as: [x0, x1, x2] = ([x1, x2] - [x0, x1])/(x2 - x0). This goes on
in a similar fashion for the third, fourth and nth divided differences.
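The interpolating polynomial assembled from these divided differences (the form evaluated by the program below) is:
f(x) ≈ f(x0) + (x - x0)[x0, x1] + (x - x0)(x - x1)[x0, x1, x2] + ... + (x - x0)(x - x1)...(x - x(n-1))[x0, x1, ..., xn]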
Algorithm:
Step-1. Start.
Step-2. Read number of data (n)
Step-3. Read data Xi and Yi for i = 1 to n
Step-4. Read value of the independent variable, say xp, whose corresponding value of the dependent variable, say yp, is to be determined.
Step-5. Build the divided-difference table in place:
        For j = 1 to n-1
            For i = n down to j+1
                Y[i] = (Y[i] - Y[i-1])/(X[i] - X[i-j])
Step-6. Evaluate Newton's polynomial at xp:
        Set yp = Y[n]
        For i = n-1 down to 1
            yp = yp * (xp - X[i]) + Y[i]
Step-7. Display value of yp as the interpolated value.
Step-8. Stop
Code:
#include <iostream>
using namespace std;
// Builds the divided differences in place: afterwards f[i] holds [X1, ..., Xi]
void dividedDifference(int n, double x[], double y[], double f[]) {
    for (int i = 1; i <= n; i++) f[i] = y[i];
    for (int j = 1; j < n; j++)
        for (int i = n; i > j; i--)
            f[i] = (f[i] - f[i - 1]) / (x[i] - x[i - j]);
}
int main() {
    int n;
    double x[50], y[50], f[50], target;
    cout << "Enter number of data points: "; cin >> n;
    cout << "Enter data:" << endl;
    for (int i = 1; i <= n; i++) {
        cout << "x[" << i << "] = "; cin >> x[i];
        cout << "y[" << i << "] = "; cin >> y[i];
    }
    dividedDifference(n, x, y, f);
    cout << "Enter the x-value to interpolate: "; cin >> target;
    double result = f[n];                  // evaluate Newton's polynomial by nested multiplication
    for (int i = n - 1; i >= 1; i--)
        result = result * (target - x[i]) + f[i];
    cout << "Interpolated value at " << target << " is " << result << endl;
    return 0;
}
Output:
Graph:
EXPERIMENT - 6
Aim: Program for solving Numerical Integration by the Trapezoidal Rule.
Theory: In numerical analysis, the Trapezoidal method is a technique for evaluating a definite integral.
This method is also known as the Trapezoidal rule or Trapezium rule.
This method is based on the Newton-Cotes Quadrature Formula, and the Trapezoidal rule is
obtained when we put n = 1 in that formula.
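With step size h = (upper limit - lower limit)/n and ordinates f(x0), f(x1), ..., f(xn), the composite Trapezoidal rule reads:
Integral from a to b of f(x) dx ≈ (h/2) [ f(x0) + 2{ f(x1) + f(x2) + ... + f(x(n-1)) } + f(xn) ]
For example, for f(x) = x^(1/2) on [0, 4] with n = 4 (h = 1): (1/2)[0 + 2(1 + 1.41421 + 1.73205) + 2] ≈ 5.14626, which agrees with the result reported below.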
Algorithm:
Step-1. Start
Step-2. Define function f(x)
Step-3. Read lower limit of integration, upper limit of integration and number of sub-intervals.
Step-4. Calculate: step size (h) = (upper limit - lower limit)/number of sub-intervals
Step-5. Set: integration value = f(lower limit) + f(upper limit)
Step-6. Set: i = 1
Step-7. If i ≥ number of sub-intervals then goto Step-11.
Step-8. Calculate: k = lower limit + i * h
Step-9. Calculate: Integration value = Integration value + 2 * f(k)
Step-10. Increment i by 1, i.e. i = i + 1, and go to Step-7.
Step-11. Calculate: Integration value = Integration value * step size / 2
Step-12. Display Integration value as the required answer.
Step-13. Stop.
Code:
#include <iostream>
#include <cmath>
using namespace std;

double function(double x) {
    return pow(x, 0.5);
}

// Composite trapezoidal rule on [lowerLimit, upperLimit] with numSubIntervals sub-intervals
double trapezoidal(double lowerLimit, double upperLimit, int numSubIntervals) {
    double stepSize = (upperLimit - lowerLimit) / numSubIntervals;
    double integrationValue = function(lowerLimit) + function(upperLimit);
    for (int i = 1; i < numSubIntervals; i++) {
        double k = lowerLimit + i * stepSize;
        integrationValue += 2 * function(k);
    }
    integrationValue *= stepSize / 2;
    return integrationValue;
}

int main() {
    double lowerLimit, upperLimit;
    int numSubIntervals;
    cout << "Enter the lower limit, upper limit and number of sub-intervals : ";
    cin >> lowerLimit >> upperLimit >> numSubIntervals;
    cout << "The value of the integral is : " << trapezoidal(lowerLimit, upperLimit, numSubIntervals) << endl;
    return 0;
}
Output:
Graph:
Result: The integration of the function (x)^(1/2) was found to be 5.14626 in the given
interval 0 to 4 through this program.
EXPERIMENT – 7
Aim: Program for solving Numerical Integration by Simpson’s 1/3rd Rule.
Theory: Simpson’s rule is one of the numerical methods used to evaluate a definite
integral. Usually, to find a definite integral, we use the fundamental theorem of calculus, where we
have to apply the antiderivative techniques of integration.
Simpson’s 1/3rd rule is an extension of the trapezoidal rule in which the integrand is
approximated by a second-order polynomial. Simpson’s rule can be derived in various
ways: using Newton’s divided difference polynomial, Lagrange polynomials, or the method
of coefficients. Simpson’s 1/3rd rule is defined by:
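For an even number n of sub-intervals of width h = (b - a)/n, the rule reads:
Integral from a to b of f(x) dx ≈ (h/3) [ f(x0) + 4 f(x1) + 2 f(x2) + 4 f(x3) + ... + 4 f(x(n-1)) + f(xn) ]
i.e. interior ordinates with odd index are weighted 4 and those with even index are weighted 2, exactly as in the loop of the program below.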
Algorithm:
Step-1. Start
Step-2. Define Function f(x)
Step-3. Input lower_limit, upper_limit, sub_interval
Step-4. Calculate: step_size = (upper_limit - lower_limit)/sub_interval
Step-5. Calculate: integration = f(lower_limit) + f(upper_limit)
Step-6. Set: i = 1
Step-7. Loop while i < sub_interval
            k = lower_limit + i * step_size
            If i mod 2 = 0
                integration = integration + 2 * f(k)
            Else
                integration = integration + 4 * f(k)
            End If
            i = i + 1
Step-8. integration = integration * step_size/3
Step-9. Print integration as result
Step-10. Stop
Code:
#include <iostream>
#include <cmath>
using namespace std;

double function(double x) {
    return pow(x, 0.5);
}

// Composite Simpson's 1/3rd rule (numSubIntervals should be even)
double simpson(double lowerLimit, double upperLimit, int numSubIntervals) {
    double stepSize = (upperLimit - lowerLimit) / numSubIntervals;
    double integration = function(lowerLimit) + function(upperLimit);
    for (int i = 1; i < numSubIntervals; i++) {
        double k = lowerLimit + i * stepSize;
        if (i % 2 == 0) {
            integration += 2 * function(k);
        } else {
            integration += 4 * function(k);
        }
    }
    integration *= stepSize / 3;
    return integration;
}

int main() {
    double lowerLimit, upperLimit;
    int numSubIntervals;
    cout << "Enter the lower limit, upper limit and number of sub-intervals : ";
    cin >> lowerLimit >> upperLimit >> numSubIntervals;
    cout << "The value of the integral is : " << simpson(lowerLimit, upperLimit, numSubIntervals) << endl;
    return 0;
}
Output:
Graph:
Result: The integration of the function (x)^(1/2) was found to be 5.9713 in the given interval 0 to 4
through this program.
EXPERIMENT - 8
Aim: Program for solving Numerical Integration by Simpson’s 3/8th Rule.
Theory: Simpson’s rule is one of the numerical methods used to evaluate a definite
integral. Usually, to find a definite integral, we use the fundamental theorem of
calculus, where we have to apply the antiderivative techniques of integration.
Simpson’s 3/8th rule is based on cubic interpolation rather than quadratic interpolation.
This rule is more accurate than the standard 1/3rd method, as it uses one more functional value.
A composite Simpson’s 3/8 rule also exists, similar to the generalised form of the 1/3rd rule.
The 3/8 rule is also known as Simpson’s second rule of integration.
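For n sub-intervals (n a multiple of 3) of width h, the composite 3/8 rule implemented below is:
Integral from a to b of f(x) dx ≈ (3h/8) [ f(x0) + 3 f(x1) + 3 f(x2) + 2 f(x3) + 3 f(x4) + 3 f(x5) + 2 f(x6) + ... + f(xn) ]
i.e. interior ordinates whose index is a multiple of 3 are weighted 2, and all other interior ordinates are weighted 3.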
Algorithm:
Step-1. Start
Step-2. Define function f(x)
Step-3. Read lower limit of integration, upper limit of integration and number of sub-intervals.
Step-4. Calculate: step size (h) = (upper limit - lower limit)/number of sub-intervals
Step-5. Set: integration value = f(lower limit) + f(upper limit)
Step-6. Set: i = 1
Step-7. If i ≥ number of sub-intervals then goto Step-11.
Step-8. Calculate: k = lower limit + i * h
Step-9. If i mod 3 = 0 then
            Integration value = Integration value + 2 * f(k)
        Otherwise
            Integration value = Integration value + 3 * f(k)
        End If
Step-10. Increment i by 1, i.e. i = i + 1, and go to Step-7.
Step-11. Calculate: Integration value = Integration value * step size * 3/8
Step-12. Display Integration value as the required answer.
Step-13. Stop
Code:
#include <iostream>
#include <cmath>
using namespace std;

double function(double x) {
    return pow(x, 0.5);
}

// Composite Simpson's 3/8th rule (numSubIntervals should be a multiple of 3)
double simpson38(double lowerLimit, double upperLimit, int numSubIntervals) {
    double stepSize = (upperLimit - lowerLimit) / numSubIntervals;
    double integration = function(lowerLimit) + function(upperLimit);
    for (int i = 1; i < numSubIntervals; i++) {
        double k = lowerLimit + i * stepSize;
        if (i % 3 == 0) {
            integration += 2 * function(k);
        } else {
            integration += 3 * function(k);
        }
    }
    integration *= stepSize * 3 / 8;
    return integration;
}

int main() {
    double lowerLimit, upperLimit;
    int numSubIntervals;
    cout << "Enter the lower limit, upper limit and number of sub-intervals : ";
    cin >> lowerLimit >> upperLimit >> numSubIntervals;
    cout << "The value of the integral is : " << simpson38(lowerLimit, upperLimit, numSubIntervals) << endl;
    return 0;
}
Output:
Graph:
Result: The integration of the function (x)^(1/2) was found to be 5.97072 in the given
interval 0 to 4 through this program.
EXPERIMENT - 9
Aim: To find the inverse of a system of linear equations using Gauss-Jordan’s Method.
Theory: The Gauss-Jordan elimination method is a technique used to solve a system of
linear equations or to find the inverse of a matrix. The process involves transforming
an augmented matrix (a matrix that includes both the coefficient matrix and the constants
vector, or the identity matrix when the inverse is required) into a form where the original
matrix becomes the identity matrix, so that the solution or the inverse can be read off directly.
In order to find the inverse of a matrix, the following steps need to be followed:
1. Form the augmented matrix by appending the identity matrix.
2. Perform row reduction operations on this augmented matrix to generate the reduced row
echelon form of the matrix.
3. The following row operations are performed on the augmented matrix when required:
a. Interchange any two rows.
b. Multiply each element of a row by a non-zero constant.
c. Replace a row by the sum of itself and a constant multiple of another row of the matrix.
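As a small worked illustration (an example matrix, not one prescribed by the experiment), take A = [[2, 1], [1, 1]]:
[ 2 1 | 1 0 ]
[ 1 1 | 0 1 ]
Applying R1 = R1/2 and then R2 = R2 - R1 gives rows [ 1 0.5 | 0.5 0 ] and [ 0 0.5 | -0.5 1 ].
Applying R2 = 2*R2 and then R1 = R1 - 0.5*R2 gives rows [ 1 0 | 1 -1 ] and [ 0 1 | -1 2 ],
so A^-1 = [[1, -1], [-1, 2]], which can be checked by verifying A * A^-1 = I.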
Algorithm:
Step-1. Start
Step-2. Read Number of Unknowns: n
Step-3. Read Augmented Matrix (A) of n by n+1 Size
Step-4. Transform Augmented Matrix (A) to Diagonal Matrix by Row operations.
Step-5. Obtain Solution by Making All Diagonal Elements 1.
Step-6. Display Result.
Step-7. Stop
Code:
#include <iostream>
#include <iomanip>
#include <vector>
using namespace std;

// Prints the augmented matrix row by row
void printMatrix(const vector<vector<double>>& matrix) {
    for (const auto& row : matrix) {
        for (double value : row)
            cout << setw(10) << value << " ";
        cout << endl;
    }
}

// Gauss-Jordan elimination on an n x (n+1) augmented matrix
void gaussJordan(vector<vector<double>>& matrix) {
    int n = matrix.size();
    for (int i = 0; i < n; i++) {
        // Make the pivot element 1
        double divisor = matrix[i][i];
        for (int j = 0; j <= n; j++) {
            matrix[i][j] /= divisor;
        }
        // Eliminate the i-th column from every other row
        for (int k = 0; k < n; k++) {
            if (k != i) {
                double factor = matrix[k][i];
                for (int j = 0; j <= n; j++) {
                    matrix[k][j] -= factor * matrix[i][j];
                }
            }
        }
    }
}

int main() {
    int n;
    cout << "Enter the number of unknowns: ";
    cin >> n;
    vector<vector<double>> augmentedMatrix(n, vector<double>(n + 1));
    cout << "Enter the augmented matrix (size " << n << "x" << n + 1 << "):" << endl;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= n; j++)
            cin >> augmentedMatrix[i][j];
    gaussJordan(augmentedMatrix);
    cout << "The reduced matrix is:" << endl;
    printMatrix(augmentedMatrix);
    return 0;
}
Output:
Result: The inverse of a system of linear equations using Gauss-Jordan’s Method was
found out through this program.
EXPERIMENT - 10
Aim: To find the dominant eigenvalue and the corresponding eigenvector of a matrix using the Power Method.
Theory:
The Power Method is an iterative algorithm used to find the dominant eigenvalue and its
corresponding eigenvector of a square matrix. The dominant eigenvalue is the one with the
largest magnitude. The method is based on the principle that, with each iteration, the
current vector is multiplied by the matrix and normalised, so that it converges toward the
dominant eigenvector.
The Power Method is effective for finding the dominant eigenvalue and eigenvector of a
matrix, especially when the matrix is large and sparse. However, it may not converge, or
may converge to a different eigenvalue, if the matrix has repeated eigenvalues or if the
initial vector is poorly chosen.
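As a quick illustration (an example matrix, not one prescribed by the experiment), take A = [[2, 1], [1, 2]] with initial vector X = [1, 0]:
A*X = [2, 1], largest element 2, so lambda ≈ 2 and X becomes [1, 0.5].
A*X = [2.5, 2], largest element 2.5, so lambda ≈ 2.5 and X becomes [1, 0.8].
Continuing, lambda converges to the dominant eigenvalue 3 and X to the eigenvector [1, 1].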
Algorithm:
1. Start
2. Read Order of Matrix (n) and Tolerable Error (e)
3. Read Matrix A of Size n x n
4. Read Initial Guess Vector X of Size n x 1
5. Initialise: Lambda_Old = 1
6. Multiply: X_NEW = A * X
7. Replace X by X_NEW
8. Find Largest Element (Lambda_New) by Magnitude from X_NEW
9. Normalise or Divide X by Lambda_New
10. Display Lambda_New and X
11. If |Lambda_Old - Lambda_New| > e then
    set Lambda_Old = Lambda_New and goto Step-6, otherwise goto Step-12
12. Stop
Code:
#include <iostream>
#include <cmath>
#include <vector>
using namespace std;

// Multiplies the n x n matrix A with the n x 1 vector X
vector<double> matrixVectorMultiply(const vector<vector<double>>& A, const vector<double>& X) {
    int n = A.size();
    vector<double> result(n, 0.0);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            result[i] += A[i][j] * X[j];
        }
    }
    return result;
}

// Returns the index of the element with the largest magnitude
int largestMagnitudeIndex(const vector<double>& v) {
    int index = 0;
    double maxMagnitude = 0.0;
    for (int i = 0; i < (int)v.size(); i++) {
        double magnitude = fabs(v[i]);
        if (magnitude > maxMagnitude) {
            maxMagnitude = magnitude;
            index = i;
        }
    }
    return index;
}

int main() {
    int n;
    double tolerance;
    cout << "Enter the order of the matrix and the tolerable error: ";
    cin >> n >> tolerance;
    vector<vector<double>> A(n, vector<double>(n));
    vector<double> X(n);
    cout << "Enter the matrix A of size " << n << "x" << n << ":" << endl;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            cin >> A[i][j];
    cout << "Enter the initial guess vector X of size " << n << "x1:" << endl;
    for (int i = 0; i < n; i++)
        cin >> X[i];
    double lambdaOld = 1.0, lambdaNew;
    while (true) {
        vector<double> XNew = matrixVectorMultiply(A, X);
        X = XNew;
        lambdaNew = X[largestMagnitudeIndex(X)];
        for (int i = 0; i < n; i++)
            X[i] /= lambdaNew;
        cout << "Eigenvalue estimate: " << lambdaNew << endl;
        if (fabs(lambdaOld - lambdaNew) > tolerance)
            lambdaOld = lambdaNew;
        else
            break;
    }
    return 0;
}
Output:
Result: The Power Method is hence successfully implemented to find eigenvalues.
EXPERIMENT - 11
Theory:
The Runge-Kutta method is a numerical technique used for solving ordinary differential
equations (ODEs) or systems of ODEs. It is a popular method due to its simplicity and
efficiency. The basic idea is to iteratively estimate the solution at discrete points,
progressing step by step. The most commonly used version is the fourth-order Runge-Kutta
method (RK4).
The Runge-Kutta method provides the approximate value of y for a given point x. Only
first-order ODEs can be solved using the RK4 method. The formula for the fourth-order
Runge-Kutta method is given by:
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
Here,
k1 = h f(x0, y0)
k2 = h f[x0 + (1/2)h, y0 + (1/2)k1]
k3 = h f[x0 + (1/2)h, y0 + (1/2)k2]
k4 = h f(x0 + h, y0 + k3)
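For instance, taking the illustrative ODE dy/dx = x + y with y(0) = 1 and step size h = 0.1 (this particular equation is only an example, not necessarily the one used for the output below):
k1 = 0.1 * (0 + 1) = 0.1
k2 = 0.1 * (0.05 + 1.05) = 0.11
k3 = 0.1 * (0.05 + 1.055) = 0.1105
k4 = 0.1 * (0.1 + 1.1105) = 0.12105
k = (k1 + 2k2 + 2k3 + k4)/6 ≈ 0.110342
so y(0.1) ≈ 1.110342, which agrees with the exact solution y = 2e^x - x - 1 to six decimal places.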
Algorithm:
Step-1. Start
Step-2. Define function f(x, y)
Step-3. Read values of the initial condition (x0 and y0), number of steps (n) and calculation point (xn)
Step-4. Calculate step size (h) = (xn - x0)/n
Step-5. Set i = 0
Step-6. Loop while (i < n)
            k1 = h * f(x0, y0)
            k2 = h * f(x0 + h/2, y0 + k1/2)
            k3 = h * f(x0 + h/2, y0 + k2/2)
            k4 = h * f(x0 + h, y0 + k3)
            k = (k1 + 2*k2 + 2*k3 + k4)/6
            yn = y0 + k
            i = i + 1
            x0 = x0 + h
            y0 = yn
Step-7. Display yn as result
Step-8. Stop.
Code:
#include <iostream>
using namespace std;

// Right-hand side of the ODE dy/dx = f(x, y); an example function is used here
double f(double x, double y) {
    return x + y;
}

// Fourth-order Runge-Kutta from (x0, y0) up to xn in n steps
double rungeKutta(double x0, double y0, double xn, int n) {
    double h = (xn - x0) / n;
    int i = 0;
    while (i < n)
    {
        double k1 = h * f(x0, y0);
        double k2 = h * f(x0 + h/2, y0 + k1/2);
        double k3 = h * f(x0 + h/2, y0 + k2/2);
        double k4 = h * f(x0 + h, y0 + k3);
        double k = (k1 + 2*k2 + 2*k3 + k4)/6;
        double yn = y0 + k;
        i = i + 1;
        x0 = x0 + h;
        y0 = yn;
    }
    return y0;
}

int main() {
    double x0, y0, xn;
    int n;
    cout << "Enter x0, y0, xn and the number of steps: ";
    cin >> x0 >> y0 >> xn >> n;
    double result = rungeKutta(x0, y0, xn, n);
    cout << "The solution at x = " << xn << " is y = " << result << endl;
    return 0;
}
Output:
Result: An ODE (x + y + 12) was solved and hence the Runge-Kutta method was
successfully implemented through this program.