

4.2.1 Discrete Least Squares Approximation

The objective is to determine the best fit function of the form
$f(x; a)$ for a given set of data points $(x_i, y_i)$, $i = 0, 1, \dots, m$,
where $a = (a_0, a_1, \dots, a_n)$ is a vector of parameters. The
discrete least squares problem is to find the value of $a$ that
minimizes the least squares error,
$$E(a_0, a_1, \dots, a_n) = E(a) = \sum_{i=0}^{m} \big(y_i - f(x_i; a)\big)^2.$$

Linear-fit

The simplest example of the best fit function is the linear
curve $f(x; a_0, a_1) = a_0 + a_1 x$. The least squares error in this
case is
$$E(a_0, a_1) = \sum_{i=0}^{m} \big(y_i - (a_0 + a_1 x_i)\big)^2.$$

For minimum $E$,
$$\frac{\partial E}{\partial a_0} = -2 \sum_{i=0}^{m} \big(y_i - (a_0 + a_1 x_i)\big) = 0,$$
$$\frac{\partial E}{\partial a_1} = -2 \sum_{i=0}^{m} \big(y_i - (a_0 + a_1 x_i)\big)\, x_i = 0.$$

Simplifying, we get the following linear system:
$$(m+1)\, a_0 + a_1 \sum_{i=0}^{m} x_i = \sum_{i=0}^{m} y_i,$$
$$a_0 \sum_{i=0}^{m} x_i + a_1 \sum_{i=0}^{m} x_i^2 = \sum_{i=0}^{m} x_i y_i.$$

(The coefficient of $a_0$ is $m+1$ because the sum runs over the $m+1$ points $i = 0, 1, \dots, m$.)

Finally, $a_0$ and $a_1$ can be found by solving the above system.
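
As a quick illustration (not part of the original derivation), here is a minimal MATLAB sketch that assembles and solves this 2-by-2 system; the data vectors are made up purely for demonstration:

x = [1 2 3 4 5];                 % made-up abscissas, for illustration only
y = [2.1 3.9 6.2 7.8 10.1];      % made-up ordinates
np = numel(x);                   % number of points, m + 1 in the notes
S = [np      sum(x);
     sum(x)  sum(x.^2)];         % coefficient matrix of the system
r = [sum(y); sum(x.*y)];         % right-hand side
a = S \ r                        % a(1) = a0, a(2) = a1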



Polynomial-fit

Going a step further, we consider polynomials as the fitting
curve, that is,
$$f(x; a) = P_n(x) = a_0 + a_1 x + \cdots + a_n x^n.$$

The corresponding least squares error is
$$E(a_0, a_1, \dots, a_n) = \sum_{i=0}^{m} \big(y_i - P_n(x_i)\big)^2.$$

For minimum $E$,
$$\frac{\partial E}{\partial a_0} = -2 \sum_{i=0}^{m} \big(y_i - P_n(x_i)\big) = 0,$$
$$\frac{\partial E}{\partial a_1} = -2 \sum_{i=0}^{m} \big(y_i - P_n(x_i)\big)\, x_i = 0,$$
$$\vdots$$
$$\frac{\partial E}{\partial a_n} = -2 \sum_{i=0}^{m} \big(y_i - P_n(x_i)\big)\, x_i^n = 0.$$

Simplifying, we get the following linear system:
$$\begin{cases}
(m+1)\, a_0 + a_1 \sum_{i=0}^{m} x_i + \cdots + a_n \sum_{i=0}^{m} x_i^n = \sum_{i=0}^{m} y_i, \\[4pt]
a_0 \sum_{i=0}^{m} x_i + a_1 \sum_{i=0}^{m} x_i^2 + \cdots + a_n \sum_{i=0}^{m} x_i^{n+1} = \sum_{i=0}^{m} x_i y_i, \\[4pt]
\quad \vdots \\[4pt]
a_0 \sum_{i=0}^{m} x_i^n + a_1 \sum_{i=0}^{m} x_i^{n+1} + \cdots + a_n \sum_{i=0}^{m} x_i^{2n} = \sum_{i=0}^{m} x_i^n y_i.
\end{cases} \tag{31}$$

The above equations are also known as the normal equations. Let
us consider the following matrices:
$$A = \begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^n
\end{bmatrix}, \quad
b = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_m \end{bmatrix}, \quad
a = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}.$$

Then the least squares error can be expressed as $E = \|Aa - b\|_2^2$,
and the normal equations can be written in the following matrix
form:
$$A^T A\, a = A^T b. \tag{32}$$

Finally, we find $a$ by solving the normal equations (31) or (32).
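
Equivalently, (32) can be assembled and solved in a few lines of MATLAB. The sketch below is an illustration rather than part of the notes; the data and the degree n are hypothetical, and the construction of A uses implicit expansion (MATLAB R2016b or later):

x = [0; 1; 2; 3];                % hypothetical data, column vectors
y = [1.1; 2.9; 5.2; 6.8];
n = 2;                           % degree of the fitting polynomial
A = x.^(0:n);                    % rows are [1, x_i, x_i^2, ..., x_i^n]
a = (A'*A) \ (A'*y)              % solve A'*A*a = A'*b, equation (32)
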
Note 4.1. In the problem section of this chapter, a range of non-polynomial fitting problems is given as exercises, some of which have been addressed in class. It is suggested that you attempt to fit curves of the exponential forms $a e^{bx}$, $a b^x$, and $a x^b$; the hint is to linearize them using $\log_{10}$.
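
For instance, taking $\log_{10}$ of $y = a e^{bx}$ gives $\log_{10} y = \log_{10} a + (b \log_{10} e)\, x$, which is linear in $x$. A minimal MATLAB sketch of this linearization (an illustration, not part of the notes; it assumes data vectors x and y are given, with positive entries in y):

Y = log10(y(:));                 % log10(y) = log10(a) + b*log10(e)*x
V = [ones(numel(x), 1), x(:)];   % design matrix of the linearized fit
c = (V'*V) \ (V'*Y);             % ordinary least squares in log space
aFit = 10^c(1);                  % recover a from the intercept
bFit = c(2) * log(10)            % recover b, since log10(e) = 1/log(10)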

4.2.2 Least Squares Approximation of Functions

Suppose $f \in C[a, b]$. In this section we find a polynomial $P_n(x)$ of degree at most $n$ to approximate $f$ such that $\int_a^b (f(x) - P_n(x))^2 \, dx$ is a minimum. The least squares error in this case is
$$E(a_0, a_1, \dots, a_n) = \int_a^b \big(f(x) - P_n(x)\big)^2 \, dx,$$

where
$$P_n(x) = a_0 + a_1 x + \cdots + a_n x^n.$$

For minimum $E$,
$$\frac{\partial E}{\partial a_0} = -2 \int_a^b \big(f(x) - P_n(x)\big) \, dx = 0,$$
$$\frac{\partial E}{\partial a_1} = -2 \int_a^b \big(f(x) - P_n(x)\big)\, x \, dx = 0,$$
$$\vdots$$
$$\frac{\partial E}{\partial a_n} = -2 \int_a^b \big(f(x) - P_n(x)\big)\, x^n \, dx = 0.$$

Simplifying, we get the following linear system (the normal equations):
$$\begin{cases}
a_0 \int_a^b dx + a_1 \int_a^b x \, dx + \cdots + a_n \int_a^b x^n \, dx = \int_a^b f(x) \, dx, \\[4pt]
a_0 \int_a^b x \, dx + a_1 \int_a^b x^2 \, dx + \cdots + a_n \int_a^b x^{n+1} \, dx = \int_a^b x f(x) \, dx, \\[4pt]
\quad \vdots \\[4pt]
a_0 \int_a^b x^n \, dx + a_1 \int_a^b x^{n+1} \, dx + \cdots + a_n \int_a^b x^{2n} \, dx = \int_a^b x^n f(x) \, dx.
\end{cases} \tag{33}$$

Finally, we find $a$ by solving the above equations (33).
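
The coefficient integrals in (33) have the closed form $\int_a^b x^{j+k} \, dx = \frac{b^{j+k+1} - a^{j+k+1}}{j+k+1}$, so the system is easy to assemble. Here is a small MATLAB sketch that builds and solves (33); it is an illustration under assumed names (aL, bR for the interval, f for the function, n for the degree), not part of the original notes:

f = @(x) sin(pi*x);              % example function to approximate
aL = 0;  bR = 1;  n = 2;         % interval [aL, bR] and degree n
H = zeros(n+1);  r = zeros(n+1, 1);
for j = 0:n
    for k = 0:n
        p = j + k + 1;
        H(j+1, k+1) = (bR^p - aL^p) / p;   % integral of x^(j+k)
    end
    r(j+1) = integral(@(x) x.^j .* f(x), aL, bR);
end
a = H \ r                        % coefficients a0, ..., an of Pn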

Example 4.1. Find the least squares approximating polynomial of degree 2 for the function $f(x) = \sin(\pi x)$ on the interval $[0, 1]$.

Using (33) for this problem we have
$$\begin{cases}
a_0 \int_0^1 dx + a_1 \int_0^1 x \, dx + a_2 \int_0^1 x^2 \, dx = \int_0^1 \sin(\pi x) \, dx, \\[4pt]
a_0 \int_0^1 x \, dx + a_1 \int_0^1 x^2 \, dx + a_2 \int_0^1 x^3 \, dx = \int_0^1 x \sin(\pi x) \, dx, \\[4pt]
a_0 \int_0^1 x^2 \, dx + a_1 \int_0^1 x^3 \, dx + a_2 \int_0^1 x^4 \, dx = \int_0^1 x^2 \sin(\pi x) \, dx.
\end{cases}$$

Evaluating the integrals we get
$$\begin{bmatrix}
1 & \frac{1}{2} & \frac{1}{3} \\[2pt]
\frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\[2pt]
\frac{1}{3} & \frac{1}{4} & \frac{1}{5}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
=
\begin{bmatrix} \frac{2}{\pi} \\[2pt] \frac{1}{\pi} \\[2pt] \frac{\pi^2 - 4}{\pi^3} \end{bmatrix}.$$

Solving, we have $a_0 = -0.0505$, $a_1 = 4.1225$, $a_2 = -4.1225$.


You can use the following MATLAB code to get these values:

A = [1 1/2 1/3; 1/2 1/3 1/4; 1/3 1/4 1/5];   % coefficient matrix of (33)
b = [2/pi; 1/pi; (pi^2-4)/pi^3];             % right-hand side integrals
a = A \ b                                    % a(1)=a0, a(2)=a1, a(3)=a2

Note 4.2. The coefficient matrix in the normal equations is the Hilbert matrix, which has a very high condition number, as we have previously discussed in this course. You can even calculate the condition number in MATLAB using 'cond(A)', which will return 524.0568.

To avoid the numerical instability associated with inverting Hilbert-type matrices, we use a different approach based on orthogonal bases of the polynomial space.

1. Discrete least squares and least squares approximation of functions are indeed different techniques. Discrete least squares is used when dealing with a finite number of data points, while least squares approximation of functions is used when dealing with continuous functions.

2. Cholesky factorization is a commonly used method for solving normal equations, but it may not always be the most efficient method. Other methods, such as QR factorization and singular value decomposition (SVD), can be more computationally efficient in some cases. Our syllabus does not include these methods.

Hi programmers, here's a challenge for you:

Problem 4.1. Write a MATLAB code to find the parameters of the least squares polynomial fit of a given continuous function $f$ defined on $[a, b]$. Try to minimize the use of inbuilt functions and implement your own Cholesky method to solve the normal equations. Additionally, consider providing a warning when the condition number of the coefficient matrix in the normal equations is high. One possible outline is sketched below.
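
One possible outline for the Cholesky part (a sketch under assumptions, not a model answer): given the normal-equation matrix H and right-hand side r, assembled as in the earlier sketch, factor H = L*L' by hand and solve by forward and back substitution, warning first if H is badly conditioned:

if cond(H) > 1e8                 % crude threshold for ill-conditioning
    warning('Normal equations are badly conditioned.');
end
np = size(H, 1);
L = zeros(np);                   % hand-rolled Cholesky: H = L*L'
for j = 1:np
    L(j,j) = sqrt(H(j,j) - L(j,1:j-1) * L(j,1:j-1)');
    for i = j+1:np
        L(i,j) = (H(i,j) - L(i,1:j-1) * L(j,1:j-1)') / L(j,j);
    end
end
z = zeros(np, 1);  a = zeros(np, 1);
for i = 1:np                     % forward substitution: L*z = r
    z(i) = (r(i) - L(i,1:i-1) * z(1:i-1)) / L(i,i);
end
for i = np:-1:1                  % back substitution: L'*a = z
    a(i) = (z(i) - L(i+1:np, i)' * a(i+1:np)) / L(i,i);
end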

problems

1. The following eight data points show the relationship between the number of fishermen and the amount of fish (in thousand pounds) they can catch in a day.

   Number of fishermen:  4   5   9  10  12  14  18  22
   Fish caught:          7   8   9  12  15  20  26  35

   Using the least-squares method, compute a linear polynomial that fits the above data.

2. Determine the least squares approximation of the type $ax^2 + bx + c$ to the function $2^x$ at the points $x = 0, 1, 2, 3, 4$.

3. A survey was conducted to find the relationship between the number of hours studied per week and the final exam scores of students in a certain course. The following data was collected:

   Hours studied:   2   3   4   5   6
   Exam score:     55  70  74  89  93

   Find the equation of the line of best fit for the data, and use it to predict the exam score of a student who studies for 7 hours per week. Comment on the result.

4. Obtain an approximation in the sense of the principle of least squares in the form of a polynomial of degree 2 to the function $\frac{1}{1+x^2}$ in the range $-1 \le x \le 1$.

5. Use the method of least squares to fit the curve
$$y = \frac{c_0}{x} + \frac{c_1}{\sqrt{x}}$$
   to the table of values:

   x:  0.1  0.2  0.4  0.5    1    2
   y:   21   11    7    6    5    6

6. We are given the following values of a function $f$ of the variable $t$:

   t:   0.1   0.2   0.3   0.4
   f:  0.76  0.58  0.44  0.35

   Obtain a least squares fit of the form $f = a e^{-3t} + b e^{-2t}$.

7. A physicist wants to approximate the following data:

   x:      0.0   0.5   1.0   2.0
   f(x):  0.00  0.57  1.46  5.05

   using a function $a e^{bx} + c$. He believes that $b \approx 1$.

   a) Compute the values of $a$ and $c$ that give the best least squares approximation assuming $b = 1$.

   b) Use these values of $a$ and $c$ to obtain a better value of $b$.

8. Three disease-carrying organisms decay exponentially in lake water according to the following model:
$$p(t) = A e^{-1.5t} + B e^{-0.3t} + C e^{-0.05t}.$$
   Use regression to estimate the initial population of each organism ($A$, $B$, and $C$) given the following measurements:

   t (hr):  0.5    1    2    3    4    5    6    7    9
   p(t):    6.0  4.4  3.2  2.7  2.2  1.9  1.7  1.4  1.1

9. An investigator has reported the data tabulated below.

   x:       1    2    3    4    5
   f(x):  0.5    2  2.9  3.5    4

   It is known that such data can be modelled by the following equation
$$x = \exp\left(\frac{y - b}{a}\right),$$
   where $a$ and $b$ are parameters. Use non-linear regression to determine $a$ and $b$. Based on your analysis, predict $y$ at $x = 2.6$.

10. Consider the following approximation problem. Determine $\min \|1 - x - g(x)\|$, where $g(x) = ax + bx^2$ and $a$ and $b$ are real numbers, and determine a best approximation $g$, if
$$\|f\|^2 = \int_0^1 f(x)^2 \, dx.$$
    Is the approximation unique?
