Lab 1
Newton-Raphson Method

Figure. Geometrical illustration of the Newton-Raphson method: the tangent to f(x) at (x_i, f(x_i)) intersects the x-axis at the next estimate x_{i+1}.

The slope of the tangent line at x_i gives

    \tan\theta = \frac{AB}{AC} = \frac{f(x_i)}{x_i - x_{i+1}} = f'(x_i)

On rearranging, the Newton-Raphson iteration is

    x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

The absolute relative approximate error at the end of each iteration is

    \left| \epsilon_a \right| = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100
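As a quick worked check of the iteration, take f(x) = x^2 - 5x + 4 (the same function used in the MATLAB script later in this lab) with an assumed starting guess x_0 = 6:

    f'(x) = 2x - 5
    x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 6 - \frac{10}{7} \approx 4.5714
    x_2 = 4.5714 - \frac{2.0408}{4.1429} \approx 4.0788

so |\epsilon_a| = \left| \frac{4.0788 - 4.5714}{4.0788} \right| \times 100 \approx 12.1% after the second iteration, and further iterations converge rapidly to the root x = 4.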
Step 5

Compare the absolute relative approximate error with the pre-specified relative error tolerance. If the error is larger, perform another iteration; otherwise stop. Also check whether the number of iterations has exceeded the maximum allowed.
Basis of Bisection Method

Figure 1. At least one root of f(x) = 0 exists between two points x_l and x_u if the function is real, continuous, and changes sign between them.
Basis of Bisection Method

Figure 2. If the function f(x) does not change sign between two points, roots of the equation f(x) = 0 may still exist between the two points.
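The slides state the sign-change condition without code; below is a minimal MATLAB sketch of the bisection method built on that condition, where the function, bracket, and tolerance are illustrative assumptions:

% Minimal bisection sketch (illustrative; not from the original slides)
f  = @(x) x^2 - 5*x + 4;   % assumed example function
xl = 0; xu = 2;            % assumed bracket with f(xl)*f(xu) < 0
tol = 5e-5;                % assumed tolerance on the bracket width
for i = 1:100
    xm = (xl + xu)/2;              % midpoint estimate of the root
    if f(xl)*f(xm) < 0             % root lies in the lower half
        xu = xm;
    else                           % root lies in the upper half
        xl = xm;
    end
    if abs(xu - xl) < tol          % bracket is small enough
        break
    end
end
fprintf('Bisection estimate: %f after %d iterations\n', xm, i);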
Secant Method – Derivation

Figure 2. Geometrical representation of the secant method: the secant line through the points (x_{i-1}, f(x_{i-1})) and (x_i, f(x_i)) crosses the x-axis at x_{i+1}.

On rearranging, the secant method is given as

    x_{i+1} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}
Step 1

Calculate the next estimate of the root from two initial guesses:

    x_{i+1} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}

Find the absolute relative approximate error:

    \left| \epsilon_a \right| = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100
Step 2

Check whether the absolute relative approximate error is greater than the pre-specified relative error tolerance. If so, go back to Step 1 using the two most recent estimates; otherwise stop.
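A minimal MATLAB sketch of these two steps, with the function, initial guesses, and tolerance as illustrative assumptions:

% Minimal secant-method sketch (illustrative assumptions throughout)
f = @(x) x^2 - 5*x + 4;    % assumed example function
xm1 = 5; xi = 6;           % assumed initial guesses x_{i-1} and x_i
tol = 1e-4;                % assumed tolerance on |ea|, in percent
for i = 1:100
    % Step 1: next estimate from the secant formula
    xp1 = xi - f(xi)*(xi - xm1)/(f(xi) - f(xm1));
    ea = abs((xp1 - xi)/xp1)*100;   % absolute relative approximate error (%)
    % Step 2: stop when the error drops below the tolerance
    if ea < tol
        break
    end
    xm1 = xi;  xi = xp1;            % shift the two most recent estimates
end
fprintf('Secant estimate: %f after %d iterations\n', xp1, i);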
% The Newton-Raphson Method
clc;
close all;
clear all;
syms x;
f = x^2 - 5*x + 4                 % Enter the function here
g = diff(f);                      % Derivative of the function
n = input('Enter the number of decimal places: ');
epsilon = 5*10^-(n+1);            % Stopping tolerance
x0 = input('Enter the initial approximation: ');
for i = 1:100
    f0 = double(subs(f, x, x0));      % Value of the function at x0
    f0_der = double(subs(g, x, x0));  % Value of the derivative at x0
    y = x0 - f0/f0_der;               % The Newton-Raphson formula
    err = abs(y - x0);
    if err < epsilon                  % Check the error at each iteration
        break
    end
    x0 = y;
end
y = y - rem(y, 10^-n);            % Truncate to the required decimal places
fprintf('The root is : %f \n', y);
fprintf('No. of iterations : %d\n', i);
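As a usage note: with the function left as f = x^2 - 5x + 4 and, for example, x0 = 6 and n = 4, the script converges to the root 4.0000 within a few iterations; a starting guess near the other root (say x0 = 0) converges to 1.0000 instead.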
Linear Regression

What is Regression?

Given n data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), best fit y = f(x) to the data, where the residual at each point is

    E_i = y_i - f(x_i)

Figure. Basic model for regression: the curve y = f(x) fit through the data points (x_1, y_1), ..., (x_n, y_n), with residual E_i at (x_i, y_i).
Linear Regression – Criterion #1

Given n data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), best fit y = a_0 + a_1 x to the data.

Does minimizing \sum_{i=1}^{n} E_i, where E_i = y_i - (a_0 + a_1 x_i), work as a criterion?

Figure. Linear regression of y vs. x data showing residuals at a typical point, x_i.
Example for Criterion #1

Example: Given the data points (2,4), (3,6), (2,6) and (3,8), best fit the data to a straight line using Criterion #1:

    minimize \sum_{i=1}^{n} E_i

    x    y
    2.0  4.0
    3.0  6.0
    2.0  6.0
    3.0  8.0

Using the regression model y = 4x - 4, the predicted values and residuals are:

    x    y    y_predicted   E = y - y_predicted
    2.0  4.0  4.0            0.0
    3.0  6.0  8.0           -2.0
    2.0  6.0  4.0            2.0
    3.0  8.0  8.0            0.0

    \sum_{i=1}^{4} E_i = 0

Figure. The four data points and the line y = 4x - 4 plotted as y vs. x.
Linear Regression – Criterion #1

\sum_{i=1}^{4} E_i = 0 for the regression model y = 4x - 4, and also for other lines such as y = 6 (residuals -2, 0, 0, 2). The sum of the residuals is minimized, in this case to zero, but the regression model is not unique. Hence the criterion of minimizing the sum of the residuals is a bad criterion.
Linear Regression – Criterion #2

Will minimizing \sum_{i=1}^{n} |E_i| work any better?

    x    y    y_predicted   E = y - y_predicted
    2.0  4.0  4.0            0.0
    3.0  6.0  8.0           -2.0
    2.0  6.0  4.0            2.0
    3.0  8.0  8.0            0.0

For the model y = 4x - 4 this gives \sum_{i=1}^{4} |E_i| = 4, but the line y = 6 gives the same value, so this criterion does not produce a unique line either.
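A few lines of MATLAB (an added check, not from the slides) make the tie between the two candidate lines concrete:

% Compare criteria #1 and #2 for two candidate lines (data from the example)
x = [2 3 2 3];  y = [4 6 6 8];
E1 = y - (4*x - 4);            % residuals for y = 4x - 4
E2 = y - 6;                    % residuals for y = 6
fprintf('sum E:   %g  %g\n', sum(E1), sum(E2));            % both 0: criterion #1 ties
fprintf('sum |E|: %g  %g\n', sum(abs(E1)), sum(abs(E2)));  % both 4: criterion #2 ties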
Least Squares Criterion

The least squares criterion minimizes the sum of the squares of the residuals, and also produces a unique line:

    S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2

Figure. Linear regression of y vs. x data showing the residual E_i = y_i - a_0 - a_1 x_i at a typical point, x_i.
Finding Constants of Linear Model

Minimize the sum of the squares of the residuals:

    S_r = \sum_{i=1}^{n} E_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2

Setting the partial derivatives with respect to a_0 and a_1 to zero,

    \frac{\partial S_r}{\partial a_0} = \sum_{i=1}^{n} 2 (y_i - a_0 - a_1 x_i)(-1) = 0

    \frac{\partial S_r}{\partial a_1} = \sum_{i=1}^{n} 2 (y_i - a_0 - a_1 x_i)(-x_i) = 0

giving the normal equations

    n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i

    a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i
Finding Constants of Linear Model

Solving for a_0 and a_1:

    a_1 = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}

and

    a_0 = \frac{\sum_{i=1}^{n} x_i^2 \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}

or equivalently

    a_0 = \bar{y} - a_1 \bar{x}
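As a sketch, here are the closed-form formulas applied in MATLAB to the four example data points from the criterion discussion; the polyfit comparison is an added convenience, not part of the slides:

% Least-squares line through the example data, using the closed-form formulas
x = [2 3 2 3];  y = [4 6 6 8];
n = numel(x);
a1 = (n*sum(x.*y) - sum(x)*sum(y)) / (n*sum(x.^2) - sum(x)^2);
a0 = mean(y) - a1*mean(x);         % a0 = ybar - a1*xbar
fprintf('y = %g + %g x\n', a0, a1);  % gives y = 1 + 2x for this data
p = polyfit(x, y, 1)               % built-in check: returns [a1 a0]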
Linear Regression (special case)

Given

    y = a_1 x

is

    a_1 = \frac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}

still correct?
Linear Regression (special case cont.)

With residuals

    \epsilon_i = y_i - a_1 x_i

the sum of the squares of the residuals is

    S_r = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} (y_i - a_1 x_i)^2
Linear Regression (special case cont.)

Differentiating with respect to a_1 and setting the result to zero,

    \frac{dS_r}{da_1} = \sum_{i=1}^{n} \left( -2 x_i y_i + 2 a_1 x_i^2 \right) = 0

gives

    a_1 = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}
Linear Regression (special case cont.)

Does

    a_1 = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}

correspond to a minimum? Checking the second derivative of

    \frac{dS_r}{da_1} = \sum_{i=1}^{n} \left( -2 x_i y_i + 2 a_1 x_i^2 \right)

gives

    \frac{d^2 S_r}{da_1^2} = \sum_{i=1}^{n} 2 x_i^2 > 0

so this value of a_1 is a local minimum of S_r.
Linear Regression (special case cont.)

Is this local minimum of S_r an absolute minimum of S_r? Since S_r is a quadratic in a_1 with positive leading coefficient \sum_{i=1}^{n} x_i^2, its graph is an upward-opening parabola, so the local minimum is also the absolute minimum.

Figure. S_r plotted against a_1: a parabola whose vertex is the least-squares value of a_1.
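A quick numerical check of the special-case formula in MATLAB, reusing the four example data points from earlier (an illustrative pairing; the slides do not apply this formula to a data set):

% Slope of the best-fit line through the origin, y = a1*x
x = [2 3 2 3];  y = [4 6 6 8];
a1 = sum(x.*y) / sum(x.^2);    % a1 = sum(xi*yi) / sum(xi^2)
fprintf('y = %g x\n', a1);     % a1 = 62/26, about 2.3846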
Interpolation
• Problem Statement. Suppose that we want to determine the coefficients of the parabola f(x) = p_1 x^2 + p_2 x + p_3 that passes through the last three density values:

    x_1 = 300   f(x_1) = 0.616
    x_2 = 400   f(x_2) = 0.525
    x_3 = 500   f(x_3) = 0.457
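One way to compute the coefficients in MATLAB is to write the three interpolation conditions as a 3-by-3 linear system and solve it; this sketch (the slides do not include code for this step) also notes the equivalent polyfit call:

% Parabola f(x) = p1*x^2 + p2*x + p3 through the three density values
x  = [300; 400; 500];
fx = [0.616; 0.525; 0.457];
A = [x.^2, x, ones(3,1)];   % each row is one condition p1*xi^2 + p2*xi + p3 = f(xi)
p = A \ fx;                 % solve the 3x3 linear system for [p1; p2; p3]
fprintf('p1 = %g, p2 = %g, p3 = %g\n', p(1), p(2), p(3));
% Check: polyfit(x, fx, 2) returns the same coefficients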