CF problem (1)

The assignment involves fitting a linear model and an exponential curve to two sets of data points using the least squares method. The linear fit results in the equation y = 1.1 + 0.8x, while the exponential fit yields y = 1.469 e^(0.503x). The document outlines the algorithms and mathematical principles behind the least squares method for both types of fittings.


Understanding the Problem

This assignment asks you to:

1. Fit a straight line (y = a + bx) to the first set of data points
2. Fit an exponential curve (y = ae^(bx)) to the second set of data points

Both fittings should be done using the least squares method, which is a mathematical approach to
find the best-fitting line or curve by minimizing the sum of squared differences between
observed and predicted values.

Data Sets

First data set (for linear fit):

x:  0    1    2    3    4
y:  1.1  1.9  2.7  3.5  4.3

Second data set (for exponential fit):

x:  2    4    6    8    10
y:  4    11   30   82   223

The assignment already provides the answers:

 Linear fit: y = 1.1 + 0.8x
 Exponential fit: y = 1.469 e^(0.503x)

Explanation of the Program/Algorithm

This algorithm uses the least squares method to fit both a straight line and an exponential curve to the given data points.

How the Program Works:

1. Linear Fitting (y = a + bx):
   o We use the formulas:
      b = (n∑xy - ∑x∑y) / (n∑x² - (∑x)²)
      a = (∑y - b∑x) / n
   o This calculates the slope (b) and y-intercept (a) that minimize the sum of squared errors.
2. Exponential Fitting (y = ae^(bx)):
   o We cannot apply linear least squares directly to an exponential curve.
   o Instead, we transform it by taking the natural logarithm of both sides: ln(y) = ln(a) + bx
   o This turns it into a linear equation, from which we can find ln(a) and b.
   o We then convert back to the original form by computing a = e^(ln(a)).
3. Verification:
   o The program also calculates predicted values from the fitted equations and compares them with the actual data.

Least Squares Method Explanation


The least squares method is a mathematical approach used to find the best-fitting curve for a
given set of data points by minimizing the sum of the squares of the differences between the
observed values and the values predicted by the model.

The Basic Concept

Imagine you have several data points plotted on a graph, and you want to find a line (or curve)
that best represents their trend. The least squares method determines this "best fit" by:

1. Calculating the vertical distance (residual) between each data point and the proposed
line/curve
2. Squaring each of these distances (to make all values positive and emphasize larger
deviations)
3. Adding all these squared distances together
4. Finding the line/curve parameters that make this sum of squared distances as small as
possible

For a Linear Model (y = a + bx)

When fitting a straight line with equation y = a + bx, we need to find the coefficients a and b that
minimize:
S = Σ(yi - (a + bxi))²

Where:

 (xi, yi) are the data points
 S is the sum of squared residuals

Taking the partial derivatives of S with respect to a and b, setting them to zero, and solving the
resulting system of equations gives us:

1. b = (n∑xiyi - ∑xi∑yi) / (n∑xi² - (∑xi)²)
2. a = (∑yi - b∑xi) / n

Where n is the number of data points.

For an Exponential Model (y = ae^(bx))

For an exponential curve, we use a transformation approach:

1. Take the natural logarithm of both sides: ln(y) = ln(a) + bx
2. Let Y = ln(y) and A = ln(a)
3. This gives us a linear equation: Y = A + bx
4. Apply the linear least squares method to find A and b
5. Calculate a = e^A to get back to the original form

Why Use Least Squares?

The least squares method is popular because:

1. It has a unique, closed-form solution for linear models
2. It is mathematically straightforward to implement
3. It gives more weight to points that are far from the line (due to squaring)
4. Under certain assumptions, it produces the maximum likelihood estimate
5. It works well for many physical and statistical relationships

In essence, the least squares method provides a systematic way to find the curve that minimizes
the overall prediction error across all data points, making it a fundamental tool in data analysis,
curve fitting, and regression modeling.

Algorithm for Least Squares Method

Here is the algorithm for implementing the least squares method for both linear and exponential curve fitting:
Linear Least Squares Algorithm (y = a + bx)

1. Input: Set of n data points (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ)
2. Compute sums:
o Calculate ∑x = x₁ + x₂ + ... + xₙ
o Calculate ∑y = y₁ + y₂ + ... + yₙ
o Calculate ∑xy = x₁y₁ + x₂y₂ + ... + xₙyₙ
o Calculate ∑x² = x₁² + x₂² + ... + xₙ²
3. Compute coefficients:
o b = (n∑xy - ∑x∑y) / (n∑x² - (∑x)²)
o a = (∑y - b∑x) / n
4. Output: Coefficients a and b for the line y = a + bx

Exponential Least Squares Algorithm (y = ae^(bx))

1. Input: Set of n data points (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ)
2. Transform data:
o For each point, compute Y₁ = ln(y₁), Y₂ = ln(y₂), ..., Yₙ = ln(yₙ)
o This creates a new dataset (x₁, Y₁), (x₂, Y₂), ..., (xₙ, Yₙ)
3. Apply linear least squares to the transformed data:
o Calculate ∑x = x₁ + x₂ + ... + xₙ
o Calculate ∑Y = Y₁ + Y₂ + ... + Yₙ
o Calculate ∑xY = x₁Y₁ + x₂Y₂ + ... + xₙYₙ
o Calculate ∑x² = x₁² + x₂² + ... + xₙ²
o Compute b = (n∑xY - ∑x∑Y) / (n∑x² - (∑x)²)
o Compute A = (∑Y - b∑x) / n
4. Transform back:
o Calculate a = e^A
5. Output: Coefficients a and b for the curve y = ae^(bx)

Pseudocode Implementation

Here is pseudocode for both algorithms:

function linearLeastSquares(x[], y[], n):
    sumX = 0
    sumY = 0
    sumXY = 0
    sumX2 = 0

    for i from 0 to n-1:
        sumX += x[i]
        sumY += y[i]
        sumXY += x[i] * y[i]
        sumX2 += x[i] * x[i]

    b = (n * sumXY - sumX * sumY) / (n * sumX2 - sumX * sumX)
    a = (sumY - b * sumX) / n

    return [a, b]

function exponentialLeastSquares(x[], y[], n):
    // Create transformed data
    Y[] = new array of size n
    for i from 0 to n-1:
        Y[i] = ln(y[i])

    // Apply linear least squares to (x, Y)
    [A, b] = linearLeastSquares(x, Y, n)

    // Transform back
    a = exp(A)

    return [a, b]

This algorithm forms the mathematical basis for the C program in the previous answer, where we
implemented these exact steps to find the coefficients for both the linear and exponential models.
